00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 2418
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3679
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.153 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.153 The recommended git tool is: git
00:00:00.153 using credential 00000000-0000-0000-0000-000000000002
00:00:00.162 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.197 Fetching changes from the remote Git repository
00:00:00.200 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.228 Using shallow fetch with depth 1
00:00:00.228 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.228 > git --version # timeout=10
00:00:00.250 > git --version # 'git version 2.39.2'
00:00:00.250 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.264 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.264 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.866 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.876 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.886 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.886 > git config core.sparsecheckout # timeout=10
00:00:05.895 > git read-tree -mu HEAD # timeout=10
00:00:05.910 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.931 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.931 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.020 [Pipeline] Start of Pipeline
00:00:06.032 [Pipeline] library
00:00:06.033 Loading library shm_lib@master
00:00:06.033 Library shm_lib@master is cached. Copying from home.
00:00:06.047 [Pipeline] node
00:00:06.061 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:06.062 [Pipeline] {
00:00:06.073 [Pipeline] catchError
00:00:06.074 [Pipeline] {
00:00:06.083 [Pipeline] wrap
00:00:06.089 [Pipeline] {
00:00:06.095 [Pipeline] stage
00:00:06.096 [Pipeline] { (Prologue)
00:00:06.115 [Pipeline] echo
00:00:06.117 Node: VM-host-SM38
00:00:06.125 [Pipeline] cleanWs
00:00:06.136 [WS-CLEANUP] Deleting project workspace...
00:00:06.136 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.143 [WS-CLEANUP] done
00:00:06.331 [Pipeline] setCustomBuildProperty
00:00:06.403 [Pipeline] httpRequest
00:00:07.116 [Pipeline] echo
00:00:07.117 Sorcerer 10.211.164.20 is alive
00:00:07.123 [Pipeline] retry
00:00:07.124 [Pipeline] {
00:00:07.132 [Pipeline] httpRequest
00:00:07.136 HttpMethod: GET
00:00:07.137 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.137 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.149 Response Code: HTTP/1.1 200 OK
00:00:07.149 Success: Status code 200 is in the accepted range: 200,404
00:00:07.150 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.175 [Pipeline] }
00:00:12.195 [Pipeline] // retry
00:00:12.205 [Pipeline] sh
00:00:12.496 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.516 [Pipeline] httpRequest
00:00:12.924 [Pipeline] echo
00:00:12.926 Sorcerer 10.211.164.20 is alive
00:00:12.936 [Pipeline] retry
00:00:12.938 [Pipeline] {
00:00:12.953 [Pipeline] httpRequest
00:00:12.959 HttpMethod: GET
00:00:12.959 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:12.960 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:12.988 Response Code: HTTP/1.1 200 OK
00:00:12.989 Success: Status code 200 is in the accepted range: 200,404
00:00:12.989 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:01:17.287 [Pipeline] }
00:01:17.305 [Pipeline] // retry
00:01:17.313 [Pipeline] sh
00:01:17.602 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:01:20.914 [Pipeline] sh
00:01:21.197 + git -C spdk log --oneline -n5
00:01:21.197 c13c99a5e test: Various fixes for Fedora40
00:01:21.197 726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:01:21.197 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:01:21.197 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:01:21.197 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
00:01:21.224 [Pipeline] writeFile
00:01:21.258 [Pipeline] sh
00:01:21.576 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:21.589 [Pipeline] sh
00:01:21.874 + cat autorun-spdk.conf
00:01:21.874 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:21.874 SPDK_TEST_NVME=1
00:01:21.874 SPDK_TEST_FTL=1
00:01:21.874 SPDK_TEST_ISAL=1
00:01:21.874 SPDK_RUN_ASAN=1
00:01:21.874 SPDK_RUN_UBSAN=1
00:01:21.874 SPDK_TEST_XNVME=1
00:01:21.874 SPDK_TEST_NVME_FDP=1
00:01:21.874 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:21.882 RUN_NIGHTLY=1
00:01:21.885 [Pipeline] }
00:01:21.902 [Pipeline] // stage
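autorun-spdk.conf, dumped above, is a plain shell fragment: the later stages source it and test the flags arithmetically. A minimal sketch of that consumption pattern, assuming the file sits in the current directory (the real consumers, prepare_nvme.sh and autorun.sh, are traced below):

    source ./autorun-spdk.conf
    # prepare_nvme.sh adds an FTL backing image only when this flag is set:
    if (( SPDK_TEST_FTL == 1 )); then
      echo 'FTL tests enabled'
    fi
    # ...and an FDP backing image on this one:
    if (( SPDK_TEST_NVME_FDP == 1 )); then
      echo 'FDP tests enabled'
    fi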
00:01:21.919 [Pipeline] stage
00:01:21.922 [Pipeline] { (Run VM)
00:01:21.934 [Pipeline] sh
00:01:22.219 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:22.219 + echo 'Start stage prepare_nvme.sh'
00:01:22.219 Start stage prepare_nvme.sh
00:01:22.219 + [[ -n 8 ]]
00:01:22.219 + disk_prefix=ex8
00:01:22.219 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:22.219 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:22.219 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:22.219 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:22.219 ++ SPDK_TEST_NVME=1
00:01:22.219 ++ SPDK_TEST_FTL=1
00:01:22.219 ++ SPDK_TEST_ISAL=1
00:01:22.219 ++ SPDK_RUN_ASAN=1
00:01:22.219 ++ SPDK_RUN_UBSAN=1
00:01:22.219 ++ SPDK_TEST_XNVME=1
00:01:22.219 ++ SPDK_TEST_NVME_FDP=1
00:01:22.219 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:22.219 ++ RUN_NIGHTLY=1
00:01:22.219 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:22.219 + nvme_files=()
00:01:22.219 + declare -A nvme_files
00:01:22.219 + backend_dir=/var/lib/libvirt/images/backends
00:01:22.219 + nvme_files['nvme.img']=5G
00:01:22.219 + nvme_files['nvme-cmb.img']=5G
00:01:22.219 + nvme_files['nvme-multi0.img']=4G
00:01:22.219 + nvme_files['nvme-multi1.img']=4G
00:01:22.219 + nvme_files['nvme-multi2.img']=4G
00:01:22.219 + nvme_files['nvme-openstack.img']=8G
00:01:22.219 + nvme_files['nvme-zns.img']=5G
00:01:22.219 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:22.219 + (( SPDK_TEST_FTL == 1 ))
00:01:22.219 + nvme_files["nvme-ftl.img"]=6G
00:01:22.219 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:22.219 + nvme_files["nvme-fdp.img"]=1G
00:01:22.219 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:22.219 + for nvme in "${!nvme_files[@]}"
00:01:22.219 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G
00:01:22.219 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:22.219 + for nvme in "${!nvme_files[@]}"
00:01:22.219 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-ftl.img -s 6G
00:01:23.166 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:23.166 + for nvme in "${!nvme_files[@]}"
00:01:23.166 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G
00:01:23.166 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:23.166 + for nvme in "${!nvme_files[@]}"
00:01:23.166 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G
00:01:23.166 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:23.166 + for nvme in "${!nvme_files[@]}"
00:01:23.166 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G
00:01:23.166 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:23.166 + for nvme in "${!nvme_files[@]}"
00:01:23.166 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G
00:01:23.166 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:23.427 + for nvme in "${!nvme_files[@]}"
00:01:23.427 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G
00:01:24.000 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:24.000 + for nvme in "${!nvme_files[@]}"
00:01:24.000 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-fdp.img -s 1G
00:01:24.000 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:24.000 + for nvme in "${!nvme_files[@]}"
00:01:24.000 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img -s 5G
00:01:24.572 Formatting '/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:24.572 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu
00:01:24.572 + echo 'End stage prepare_nvme.sh'
00:01:24.572 End stage prepare_nvme.sh
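The loop traced above drives create_nvme_img.sh from an associative array mapping image name to size; only the FTL and FDP entries are conditional on the test flags. A stand-alone sketch of the same provisioning loop, assuming qemu-img as the underlying tool (the "Formatting ... fmt=raw ... preallocation=falloc" lines are characteristic qemu-img output; the internals of create_nvme_img.sh are not shown in this log):

    declare -A nvme_files=(
      [nvme.img]=5G
      [nvme-ftl.img]=6G
      [nvme-fdp.img]=1G
    )
    backend_dir=/var/lib/libvirt/images/backends
    for nvme in "${!nvme_files[@]}"; do
      # raw image, fallocate-preallocated, matching the output above
      qemu-img create -f raw -o preallocation=falloc \
        "$backend_dir/ex8-$nvme" "${nvme_files[$nvme]}"
    done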
00:01:24.585 [Pipeline] sh
00:01:24.868 + DISTRO=fedora39
00:01:24.868 + CPUS=10
00:01:24.868 + RAM=12288
00:01:24.868 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:24.869 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex8-nvme.img -b /var/lib/libvirt/images/backends/ex8-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex8-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:24.869
00:01:24.869 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:24.869 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:24.869 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:24.869 HELP=0
00:01:24.869 DRY_RUN=0
00:01:24.869 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,/var/lib/libvirt/images/backends/ex8-nvme.img,/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,
00:01:24.869 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:24.869 NVME_AUTO_CREATE=0
00:01:24.869 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,,
00:01:24.869 NVME_CMB=,,,,
00:01:24.869 NVME_PMR=,,,,
00:01:24.869 NVME_ZNS=,,,,
00:01:24.869 NVME_MS=true,,,,
00:01:24.869 NVME_FDP=,,,on,
00:01:24.869 SPDK_VAGRANT_DISTRO=fedora39
00:01:24.869 SPDK_VAGRANT_VMCPU=10
00:01:24.869 SPDK_VAGRANT_VMRAM=12288
00:01:24.869 SPDK_VAGRANT_PROVIDER=libvirt
00:01:24.869 SPDK_VAGRANT_HTTP_PROXY=
00:01:24.869 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:24.869 SPDK_OPENSTACK_NETWORK=0
00:01:24.869 VAGRANT_PACKAGE_BOX=0
00:01:24.869 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:24.869 FORCE_DISTRO=true
00:01:24.869 VAGRANT_BOX_VERSION=
00:01:24.869 EXTRA_VAGRANTFILES=
00:01:24.869 NIC_MODEL=e1000
00:01:24.869
00:01:24.869 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:24.869 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:27.413 Bringing machine 'default' up with 'libvirt' provider...
00:01:27.986 ==> default: Creating image (snapshot of base box volume).
00:01:27.986 ==> default: Creating domain with the following settings...
00:01:27.986 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732895018_827914496fc72fb47252
00:01:27.986 ==> default: -- Domain type: kvm
00:01:27.986 ==> default: -- Cpus: 10
00:01:27.986 ==> default: -- Feature: acpi
00:01:27.986 ==> default: -- Feature: apic
00:01:27.986 ==> default: -- Feature: pae
00:01:27.986 ==> default: -- Memory: 12288M
00:01:27.986 ==> default: -- Memory Backing: hugepages:
00:01:27.986 ==> default: -- Management MAC:
00:01:27.986 ==> default: -- Loader:
00:01:27.986 ==> default: -- Nvram:
00:01:27.986 ==> default: -- Base box: spdk/fedora39
00:01:27.986 ==> default: -- Storage pool: default
00:01:27.986 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732895018_827914496fc72fb47252.img (20G)
00:01:27.986 ==> default: -- Volume Cache: default
00:01:27.986 ==> default: -- Kernel:
00:01:27.986 ==> default: -- Initrd:
00:01:27.986 ==> default: -- Graphics Type: vnc
00:01:27.986 ==> default: -- Graphics Port: -1
00:01:27.986 ==> default: -- Graphics IP: 127.0.0.1
00:01:27.986 ==> default: -- Graphics Password: Not defined
00:01:27.986 ==> default: -- Video Type: cirrus
00:01:27.986 ==> default: -- Video VRAM: 9216
00:01:27.986 ==> default: -- Sound Type:
00:01:27.986 ==> default: -- Keymap: en-us
00:01:27.986 ==> default: -- TPM Path:
00:01:27.986 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:27.986 ==> default: -- Command line args:
00:01:27.986 ==> default: -> value=-device,
00:01:27.986 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:01:27.986 ==> default: -> value=-drive,
00:01:27.986 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:27.986 ==> default: -> value=-device,
00:01:27.986 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:27.986 ==> default: -> value=-device,
00:01:27.986 ==> default: -> value=nvme,id=nvme-1,serial=12341,
00:01:27.986 ==> default: -> value=-drive,
00:01:27.986 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-1-drive0,
00:01:27.986 ==> default: -> value=-device,
00:01:27.986 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:27.986 ==> default: -> value=-device,
00:01:27.986 ==> default: -> value=nvme,id=nvme-2,serial=12342,
00:01:27.986 ==> default: -> value=-drive,
00:01:27.986 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:27.986 ==> default: -> value=-device,
00:01:27.986 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:27.986 ==> default: -> value=-drive,
00:01:27.986 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:27.986 ==> default: -> value=-device,
00:01:27.986 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:27.986 ==> default: -> value=-drive,
00:01:27.986 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:27.986 ==> default: -> value=-device,
00:01:27.986 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:27.986 ==> default: -> value=-device,
00:01:27.986 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:27.986 ==> default: -> value=-device,
00:01:27.986 ==> default: -> value=nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3,
00:01:27.986 ==> default: -> value=-drive,
00:01:27.986 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:27.986 ==> default: -> value=-device,
00:01:27.986 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
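The "Command line args" above are handed through libvirt to QEMU as -device/-drive pairs. Pulled out of the domain definition, the FDP-enabled controller alone corresponds to roughly this invocation (emulator path and all device properties are taken verbatim from the log; machine type, memory, and boot options are omitted here and would be needed for a usable VM):

    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
      -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
      -device nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,if=none,id=nvme-3-drive0 \
      -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

Flexible Data Placement is a subsystem-level feature, which is why nvme-3 is the only controller attached to an nvme-subsys device: fdp=on enables it, and fdp.runs/fdp.nrg/fdp.nruh set the reclaim-unit size, reclaim-group count, and reclaim-unit-handle count.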
00:01:27.986 ==> default: Creating shared folders metadata...
00:01:27.986 ==> default: Starting domain.
00:01:29.905 ==> default: Waiting for domain to get an IP address...
00:01:48.053 ==> default: Waiting for SSH to become available...
00:01:48.053 ==> default: Configuring and enabling network interfaces...
00:01:51.360 default: SSH address: 192.168.121.109:22
00:01:51.360 default: SSH username: vagrant
00:01:51.360 default: SSH auth method: private key
00:01:53.908 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:02.051 ==> default: Mounting SSHFS shared folder...
00:02:04.685 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:04.685 ==> default: Checking Mount..
00:02:05.630 ==> default: Folder Successfully Mounted!
00:02:05.630
00:02:05.630 SUCCESS!
00:02:05.630
00:02:05.630 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:05.630 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:05.630 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:05.630
00:02:05.640 [Pipeline] }
00:02:05.655 [Pipeline] // stage
00:02:05.665 [Pipeline] dir
00:02:05.666 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:05.667 [Pipeline] {
00:02:05.680 [Pipeline] catchError
00:02:05.682 [Pipeline] {
00:02:05.695 [Pipeline] sh
00:02:05.979 + vagrant ssh-config --host vagrant
00:02:05.979 + sed -ne '/^Host/,$p'
00:02:05.979 + tee ssh_conf
00:02:09.284 Host vagrant
00:02:09.284 HostName 192.168.121.109
00:02:09.284 User vagrant
00:02:09.284 Port 22
00:02:09.284 UserKnownHostsFile /dev/null
00:02:09.284 StrictHostKeyChecking no
00:02:09.284 PasswordAuthentication no
00:02:09.284 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:09.284 IdentitiesOnly yes
00:02:09.284 LogLevel FATAL
00:02:09.284 ForwardAgent yes
00:02:09.284 ForwardX11 yes
00:02:09.284
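The ssh_conf captured above from vagrant ssh-config is what lets every remaining step use plain ssh/scp instead of vagrant ssh. Reduced to its essentials (the uname -a payload is just an illustrative command; the log itself pipes through tee rather than redirecting):

    vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' > ssh_conf
    ssh -F ssh_conf vagrant@vagrant 'uname -a'
    scp -F ssh_conf ./autorun-spdk.conf vagrant@vagrant:spdk_repo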
00:02:09.300 [Pipeline] withEnv
00:02:09.303 [Pipeline] {
00:02:09.317 [Pipeline] sh
00:02:09.602 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:09.602 source /etc/os-release
00:02:09.602 [[ -e /image.version ]] && img=$(< /image.version)
00:02:09.602 # Minimal, systemd-like check.
00:02:09.602 if [[ -e /.dockerenv ]]; then
00:02:09.602 # Clear garbage from the node'\''s name:
00:02:09.602 # agt-er_autotest_547-896 -> autotest_547-896
00:02:09.602 # $HOSTNAME is the actual container id
00:02:09.602 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:09.602 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:09.602 # We can assume this is a mount from a host where container is running,
00:02:09.602 # so fetch its hostname to easily identify the target swarm worker.
00:02:09.602 container="$(< /etc/hostname) ($agent)"
00:02:09.602 else
00:02:09.602 # Fallback
00:02:09.602 container=$agent
00:02:09.602 fi
00:02:09.602 fi
00:02:09.602 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:09.602 '
00:02:09.878 [Pipeline] }
00:02:09.899 [Pipeline] // withEnv
00:02:09.909 [Pipeline] setCustomBuildProperty
00:02:09.927 [Pipeline] stage
00:02:09.931 [Pipeline] { (Tests)
00:02:09.951 [Pipeline] sh
00:02:10.235 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:10.512 [Pipeline] sh
00:02:10.793 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:11.120 [Pipeline] timeout
00:02:11.120 Timeout set to expire in 50 min
00:02:11.122 [Pipeline] {
00:02:11.134 [Pipeline] sh
00:02:11.418 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:11.992 HEAD is now at c13c99a5e test: Various fixes for Fedora40
00:02:12.007 [Pipeline] sh
00:02:12.292 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:12.570 [Pipeline] sh
00:02:12.856 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:13.136 [Pipeline] sh
00:02:13.420 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:13.682 ++ readlink -f spdk_repo
00:02:13.682 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:13.682 + [[ -n /home/vagrant/spdk_repo ]]
00:02:13.682 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:13.682 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:13.682 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:13.682 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:13.682 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:13.682 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:13.682 + cd /home/vagrant/spdk_repo
00:02:13.682 + source /etc/os-release
00:02:13.682 ++ NAME='Fedora Linux'
00:02:13.682 ++ VERSION='39 (Cloud Edition)'
00:02:13.682 ++ ID=fedora
00:02:13.682 ++ VERSION_ID=39
00:02:13.682 ++ VERSION_CODENAME=
00:02:13.682 ++ PLATFORM_ID=platform:f39
00:02:13.682 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:13.682 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:13.682 ++ LOGO=fedora-logo-icon
00:02:13.682 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:13.682 ++ HOME_URL=https://fedoraproject.org/
00:02:13.682 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:13.682 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:13.682 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:13.682 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:13.682 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:13.682 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:13.682 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:13.682 ++ SUPPORT_END=2024-11-12
00:02:13.682 ++ VARIANT='Cloud Edition'
00:02:13.682 ++ VARIANT_ID=cloud
00:02:13.682 + uname -a
00:02:13.682 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:13.682 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:13.682 Hugepages
00:02:13.682 node hugesize free / total
00:02:13.682 node0 1048576kB 0 / 0
00:02:13.682 node0 2048kB 0 / 0
00:02:13.682
00:02:13.682 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:13.682 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:13.682 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:13.942 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:13.942 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:13.942 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:13.942 + rm -f /tmp/spdk-ld-path
00:02:13.942 + source autorun-spdk.conf
00:02:13.942 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:13.942 ++ SPDK_TEST_NVME=1
00:02:13.942 ++ SPDK_TEST_FTL=1
00:02:13.942 ++ SPDK_TEST_ISAL=1
00:02:13.942 ++ SPDK_RUN_ASAN=1
00:02:13.942 ++ SPDK_RUN_UBSAN=1
00:02:13.942 ++ SPDK_TEST_XNVME=1
00:02:13.942 ++ SPDK_TEST_NVME_FDP=1
00:02:13.942 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:13.942 ++ RUN_NIGHTLY=1
00:02:13.942 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:13.942 + [[ -n '' ]]
00:02:13.942 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:13.942 + for M in /var/spdk/build-*-manifest.txt
00:02:13.942 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:13.942 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:13.942 + for M in /var/spdk/build-*-manifest.txt
00:02:13.942 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:13.942 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:13.942 + for M in /var/spdk/build-*-manifest.txt
00:02:13.942 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:13.942 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:13.942 ++ uname
00:02:13.942 + [[ Linux == \L\i\n\u\x ]]
00:02:13.942 + sudo dmesg -T
00:02:13.942 + sudo dmesg --clear
00:02:13.942 + dmesg_pid=4983
00:02:13.942 + [[ Fedora Linux == FreeBSD ]]
00:02:13.942 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:13.942 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:13.942 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:13.942 + [[ -x /usr/src/fio-static/fio ]]
00:02:13.942 + sudo dmesg -Tw
00:02:13.942 + export FIO_BIN=/usr/src/fio-static/fio
00:02:13.942 + FIO_BIN=/usr/src/fio-static/fio
00:02:13.942 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:13.942 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:13.942 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:13.942 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:13.942 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:13.942 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:13.942 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:13.942 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:13.942 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:13.942 Test configuration:
00:02:13.942 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:13.942 SPDK_TEST_NVME=1
00:02:13.942 SPDK_TEST_FTL=1
00:02:13.942 SPDK_TEST_ISAL=1
00:02:13.942 SPDK_RUN_ASAN=1
00:02:13.942 SPDK_RUN_UBSAN=1
00:02:13.942 SPDK_TEST_XNVME=1
00:02:13.942 SPDK_TEST_NVME_FDP=1
00:02:13.942 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:14.201 RUN_NIGHTLY=1
15:44:25 -- common/autotest_common.sh@1689 -- $ [[ n == y ]]
00:02:14.202 15:44:25 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
15:44:25 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
15:44:25 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
15:44:25 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
15:44:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:44:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:44:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:44:25 -- paths/export.sh@5 -- $ export PATH
15:44:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:44:25 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:14.202 15:44:25 -- common/autobuild_common.sh@440 -- $ date +%s
00:02:14.202 15:44:25 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732895065.XXXXXX
00:02:14.202 15:44:25 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732895065.9AgUoi
00:02:14.202 15:44:25 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:02:14.202 15:44:25 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:02:14.202 15:44:25 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:14.202 15:44:25 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:14.202 15:44:25 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:14.202 15:44:25 -- common/autobuild_common.sh@456 -- $ get_config_params
00:02:14.202 15:44:25 -- common/autotest_common.sh@397 -- $ xtrace_disable
00:02:14.202 15:44:25 -- common/autotest_common.sh@10 -- $ set +x
00:02:14.202 15:44:25 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:02:14.202 15:44:25 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:14.202 15:44:25 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:14.202 15:44:25 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:14.202 15:44:25 -- spdk/autobuild.sh@16 -- $ date -u
00:02:14.202 Fri Nov 29 03:44:25 PM UTC 2024
00:02:14.202 15:44:25 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:14.202 LTS-67-gc13c99a5e
00:02:14.202 15:44:25 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:14.202 15:44:25 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:14.202 15:44:25 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:02:14.202 15:44:25 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:14.202 15:44:25 -- common/autotest_common.sh@10 -- $ set +x
00:02:14.202 ************************************
00:02:14.202 START TEST asan
00:02:14.202 ************************************
00:02:14.202 using asan
00:02:14.202 15:44:25 -- common/autotest_common.sh@1114 -- $ echo 'using asan'
00:02:14.202
00:02:14.202 real 0m0.000s
00:02:14.202 user 0m0.000s
00:02:14.202 sys 0m0.000s
00:02:14.202 ************************************
00:02:14.202 END TEST asan
00:02:14.202 ************************************
00:02:14.202 15:44:25 -- common/autotest_common.sh@1115 -- $ xtrace_disable
00:02:14.202 15:44:25 -- common/autotest_common.sh@10 -- $ set +x
00:02:14.202 15:44:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:14.202 15:44:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:14.202 15:44:25 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:02:14.202 15:44:25 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:14.202 15:44:25 -- common/autotest_common.sh@10 -- $ set +x
00:02:14.202 ************************************
00:02:14.202 START TEST ubsan
00:02:14.202 ************************************
00:02:14.202 using ubsan
00:02:14.202 15:44:25 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan'
00:02:14.202
00:02:14.202 real 0m0.000s
00:02:14.202 user 0m0.000s
00:02:14.202 sys 0m0.000s
00:02:14.202 ************************************
00:02:14.202 END TEST ubsan
00:02:14.202 ************************************
00:02:14.202 15:44:25 -- common/autotest_common.sh@1115 -- $ xtrace_disable
00:02:14.202 15:44:25 -- common/autotest_common.sh@10 -- $ set +x
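The START TEST / END TEST banners and the zeroed time output above come from SPDK's run_test helper in autotest_common.sh. A minimal stand-in that reproduces only the framing, not the real implementation:

    run_test() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
    }
    run_test ubsan echo 'using ubsan'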
00:02:14.202 15:44:25 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:14.202 15:44:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:14.202 15:44:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:14.202 15:44:25 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:14.202 15:44:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:14.202 15:44:25 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:14.202 15:44:25 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:14.202 15:44:25 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:14.202 15:44:25 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:14.462 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:14.462 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:14.722 Using 'verbs' RDMA provider
00:02:27.893 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:02:37.867 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:02:37.867 Creating mk/config.mk...done.
00:02:37.867 Creating mk/cc.flags.mk...done.
00:02:37.867 Type 'make' to build.
00:02:37.867 15:44:48 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
00:02:37.867 15:44:48 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:02:37.867 15:44:48 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:37.867 15:44:48 -- common/autotest_common.sh@10 -- $ set +x
00:02:37.867 ************************************
00:02:37.867 START TEST make
00:02:37.867 ************************************
00:02:37.867 15:44:48 -- common/autotest_common.sh@1114 -- $ make -j10
00:02:37.867 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:37.867 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:37.867 meson setup builddir \
00:02:37.867 -Dwith-libaio=enabled \
00:02:37.867 -Dwith-liburing=enabled \
00:02:37.867 -Dwith-libvfn=disabled \
00:02:37.867 -Dwith-spdk=false && \
00:02:37.867 meson compile -C builddir && \
00:02:37.867 cd -)
00:02:37.867 make[1]: Nothing to be done for 'all'.
00:02:39.240 The Meson build system
00:02:39.240 Version: 1.5.0
00:02:39.240 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:39.240 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:39.240 Build type: native build
00:02:39.240 Project name: xnvme
00:02:39.240 Project version: 0.7.3
00:02:39.240 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:39.240 C linker for the host machine: cc ld.bfd 2.40-14
00:02:39.240 Host machine cpu family: x86_64
00:02:39.240 Host machine cpu: x86_64
00:02:39.240 Message: host_machine.system: linux
00:02:39.240 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:39.240 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:39.240 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:39.240 Run-time dependency threads found: YES
00:02:39.240 Has header "setupapi.h" : NO
00:02:39.240 Has header "linux/blkzoned.h" : YES
00:02:39.240 Has header "linux/blkzoned.h" : YES (cached)
00:02:39.240 Has header "libaio.h" : YES
00:02:39.240 Library aio found: YES
00:02:39.240 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:39.240 Run-time dependency liburing found: YES 2.2
00:02:39.240 Dependency libvfn skipped: feature with-libvfn disabled
00:02:39.240 Run-time dependency appleframeworks found: NO (tried framework)
00:02:39.240 Run-time dependency appleframeworks found: NO (tried framework)
00:02:39.241 Configuring xnvme_config.h using configuration
00:02:39.241 Configuring xnvme.spec using configuration
00:02:39.241 Run-time dependency bash-completion found: YES 2.11
00:02:39.241 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:39.241 Program cp found: YES (/usr/bin/cp)
00:02:39.241 Has header "winsock2.h" : NO
00:02:39.241 Has header "dbghelp.h" : NO
00:02:39.241 Library rpcrt4 found: NO
00:02:39.241 Library rt found: YES
00:02:39.241 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:39.241 Found CMake: /usr/bin/cmake (3.27.7)
00:02:39.241 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:02:39.241 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:02:39.241 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:02:39.241 Build targets in project: 32
00:02:39.241
00:02:39.241 xnvme 0.7.3
00:02:39.241
00:02:39.241 User defined options
00:02:39.241 with-libaio : enabled
00:02:39.241 with-liburing: enabled
00:02:39.241 with-libvfn : disabled
00:02:39.241 with-spdk : false
00:02:39.241
00:02:39.241 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:39.808 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:39.808 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:02:39.808 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:02:39.808 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:02:39.808 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:02:39.808 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o
00:02:39.808 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:02:39.808 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:02:39.808 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:02:39.808 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o
00:02:39.808 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o
00:02:39.808 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o
00:02:39.808 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o
00:02:39.808 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o
00:02:39.808 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o
00:02:39.808 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o
00:02:39.808 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o
00:02:40.068 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o
00:02:40.068 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o
00:02:40.068 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o
00:02:40.068 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o
00:02:40.068 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o
00:02:40.068 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o
00:02:40.068 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o
00:02:40.068 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o
00:02:40.068 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o
00:02:40.068 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o
00:02:40.068 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o
00:02:40.068 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o
00:02:40.068 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o
00:02:40.068 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o
00:02:40.068 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o
00:02:40.068 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o
00:02:40.068 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o
00:02:40.068 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o
00:02:40.068 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o
00:02:40.068 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o
00:02:40.068 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o
00:02:40.068 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o
00:02:40.068 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o
00:02:40.068 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o
00:02:40.068 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o
00:02:40.068 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o
00:02:40.068 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o
00:02:40.068 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o
00:02:40.068 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o
00:02:40.068 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o
00:02:40.068 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o
00:02:40.068 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o
00:02:40.068 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o
00:02:40.068 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o
00:02:40.068 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o
00:02:40.068 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o
00:02:40.068 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o
00:02:40.326 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o
00:02:40.326 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o
00:02:40.326 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o
00:02:40.326 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o
00:02:40.326 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o
00:02:40.326 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o
00:02:40.326 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o
00:02:40.326 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o
00:02:40.326 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o
00:02:40.326 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o
00:02:40.326 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o
00:02:40.326 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o
00:02:40.326 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o
00:02:40.327 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o
00:02:40.327 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o
00:02:40.327 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o
00:02:40.327 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o
00:02:40.327 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o
00:02:40.327 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o
00:02:40.327 [73/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o
00:02:40.327 [74/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o
00:02:40.327 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o
00:02:40.327 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o
00:02:40.586 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o
00:02:40.586 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o
00:02:40.586 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o
00:02:40.586 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o
00:02:40.586 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o
00:02:40.586 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o
00:02:40.586 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o
00:02:40.586 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o
00:02:40.586 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o
00:02:40.586 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o
00:02:40.586 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o
00:02:40.586 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o
00:02:40.586 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o
00:02:40.586 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o
00:02:40.586 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o
00:02:40.586 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o
00:02:40.586 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o
00:02:40.586 [94/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o
00:02:40.586 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o
00:02:40.586 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o
00:02:40.586 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o
00:02:40.586 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o
00:02:40.586 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o
00:02:40.845 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o
00:02:40.845 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o
00:02:40.845 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o
00:02:40.845 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o
00:02:40.845 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o
00:02:40.845 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o
00:02:40.845 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o
00:02:40.845 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o
00:02:40.845 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o
00:02:40.845 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o
00:02:40.845 [110/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o
00:02:40.845 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o
00:02:40.845 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o
00:02:40.845 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o
00:02:40.845 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o
00:02:40.845 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o
00:02:40.845 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o
00:02:40.845 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o
00:02:40.845 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o
00:02:40.845 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o
00:02:40.845 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o
00:02:40.845 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o
00:02:40.845 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o
00:02:40.845 [123/203] Linking target lib/libxnvme.so
00:02:40.845 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o
00:02:40.845 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o
00:02:40.845 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o
00:02:40.845 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o
00:02:40.845 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o
00:02:40.845 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o
00:02:40.845 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o
00:02:40.845 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o
00:02:40.845 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o
00:02:40.845 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o
00:02:40.845 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o
00:02:40.845 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o
00:02:40.845 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o
00:02:41.103 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o
00:02:41.103 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o
00:02:41.103 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o
00:02:41.103 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o
00:02:41.103 [141/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o
00:02:41.103 [142/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o
00:02:41.103 [143/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o
00:02:41.103 [144/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o
00:02:41.103 [145/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o
00:02:41.103 [146/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o
00:02:41.103 [147/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o
00:02:41.103 [148/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o
00:02:41.103 [149/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o
00:02:41.103 [150/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o
00:02:41.103 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o
00:02:41.103 [152/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o
00:02:41.103 [153/203] Compiling C object tests/xnvme_tests_map.p/map.c.o
00:02:41.103 [154/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o
00:02:41.103 [155/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o
00:02:41.361 [156/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o
00:02:41.361 [157/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o
00:02:41.361 [158/203] Compiling C object tools/xdd.p/xdd.c.o
00:02:41.361 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o
00:02:41.361 [160/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o
00:02:41.361 [161/203] Compiling C object tools/lblk.p/lblk.c.o
00:02:41.361 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o
00:02:41.361 [163/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o
00:02:41.361 [164/203] Compiling C object tools/zoned.p/zoned.c.o
00:02:41.361 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o
00:02:41.361 [166/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o
00:02:41.361 [167/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o
00:02:41.361 [168/203] Compiling C object tools/kvs.p/kvs.c.o
00:02:41.361 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o
00:02:41.361 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o
00:02:41.361 [171/203] Compiling C object tools/xnvme.p/xnvme.c.o
00:02:41.619 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o
00:02:41.619 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o
00:02:41.619 [174/203] Linking static target lib/libxnvme.a
00:02:41.619 [175/203] Linking target tests/xnvme_tests_async_intf
00:02:41.619 [176/203] Linking target tests/xnvme_tests_buf
00:02:41.619 [177/203] Linking target tests/xnvme_tests_scc
00:02:41.619 [178/203] Linking target tests/xnvme_tests_enum
00:02:41.619 [179/203] Linking target tests/xnvme_tests_xnvme_cli
00:02:41.619 [180/203] Linking target tests/xnvme_tests_znd_state
00:02:41.619 [181/203] Linking target tests/xnvme_tests_cli
00:02:41.619 [182/203] Linking target tests/xnvme_tests_znd_append
00:02:41.619 [183/203] Linking target tests/xnvme_tests_znd_explicit_open
00:02:41.619 [184/203] Linking target tests/xnvme_tests_lblk
00:02:41.619 [185/203] Linking target tests/xnvme_tests_xnvme_file
00:02:41.619 [186/203] Linking target tests/xnvme_tests_ioworker
00:02:41.619 [187/203] Linking target tools/xdd
00:02:41.619 [188/203] Linking target tests/xnvme_tests_kvs
00:02:41.619 [189/203] Linking target tests/xnvme_tests_map
00:02:41.619 [190/203] Linking target tools/kvs
00:02:41.619 [191/203] Linking target tests/xnvme_tests_znd_zrwa
00:02:41.619 [192/203] Linking target tools/lblk
00:02:41.619 [193/203] Linking target tools/xnvme
00:02:41.619 [194/203] Linking target examples/xnvme_enum
00:02:41.619 [195/203] Linking target examples/xnvme_io_async
00:02:41.619 [196/203] Linking target tools/zoned
00:02:41.619 [197/203] Linking target examples/xnvme_hello
00:02:41.619 [198/203] Linking target examples/xnvme_dev
00:02:41.619 [199/203] Linking target tools/xnvme_file
00:02:41.619 [200/203] Linking target examples/xnvme_single_sync
00:02:41.619 [201/203] Linking target examples/xnvme_single_async
00:02:41.619 [202/203] Linking target examples/zoned_io_sync
00:02:41.619 [203/203] Linking target examples/zoned_io_async
00:02:41.619 INFO: autodetecting backend as ninja
00:02:41.619 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:41.877 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:45.166 The Meson build system
00:02:45.166 Version: 1.5.0
00:02:45.166 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:45.166 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:45.166 Build type: native build
00:02:45.166 Program cat found: YES (/usr/bin/cat)
00:02:45.166 Project name: DPDK
00:02:45.166 Project version: 23.11.0
00:02:45.166 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:45.166 C linker for the host machine: cc ld.bfd 2.40-14
00:02:45.166 Host machine cpu family: x86_64
00:02:45.166 Host machine cpu: x86_64
00:02:45.166 Message: ## Building in Developer Mode ##
00:02:45.166 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:45.166 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:45.166 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:45.166 Program python3 found: YES (/usr/bin/python3)
00:02:45.166 Program cat found: YES (/usr/bin/cat)
00:02:45.166 Compiler for C supports arguments -march=native: YES
00:02:45.166 Checking for size of "void *" : 8
00:02:45.166 Checking for size of "void *" : 8 (cached)
00:02:45.166 Library m found: YES
00:02:45.166 Library numa found: YES
00:02:45.166 Has header "numaif.h" : YES
00:02:45.166 Library fdt found: NO
00:02:45.166 Library execinfo found: NO
00:02:45.166 Has header "execinfo.h" : YES
00:02:45.166 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:45.166 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:45.166 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:45.166 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:45.166 Run-time dependency openssl found: YES 3.1.1
00:02:45.166 Run-time dependency libpcap found: YES 1.10.4
00:02:45.166 Has header "pcap.h" with dependency libpcap: YES
00:02:45.166 Compiler for C supports arguments -Wcast-qual: YES
00:02:45.166 Compiler for C supports arguments -Wdeprecated: YES
00:02:45.166 Compiler for C supports arguments -Wformat: YES
00:02:45.166 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:45.166 Compiler for C supports arguments -Wformat-security: NO
00:02:45.166 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:45.166 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:45.166 Compiler for C supports arguments -Wnested-externs: YES
00:02:45.166 Compiler for C supports arguments -Wold-style-definition: YES
00:02:45.166 Compiler for C supports arguments -Wpointer-arith: YES
00:02:45.166 Compiler for C supports arguments -Wsign-compare: YES
00:02:45.166 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:45.166 Compiler for C supports arguments -Wundef: YES
00:02:45.166 Compiler for C supports arguments -Wwrite-strings: YES
00:02:45.166 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:45.166 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:45.166 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:45.166 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:45.166 Program objdump found: YES (/usr/bin/objdump)
00:02:45.166 Compiler for C supports arguments -mavx512f: YES
00:02:45.166 Checking if "AVX512 checking" compiles: YES
00:02:45.166 Fetching value of define "__SSE4_2__" : 1
00:02:45.166 Fetching value of define "__AES__" : 1
00:02:45.166 Fetching value of define "__AVX__" : 1
00:02:45.166 Fetching value of define "__AVX2__" : 1
00:02:45.166 Fetching value of define "__AVX512BW__" : 1
00:02:45.166 Fetching value of define "__AVX512CD__" : 1
00:02:45.166 Fetching value of define "__AVX512DQ__" : 1
00:02:45.166 Fetching value of define "__AVX512F__" : 1
00:02:45.166 Fetching value of define "__AVX512VL__" : 1
00:02:45.166 Fetching value of define "__PCLMUL__" : 1
00:02:45.166 Fetching value of define "__RDRND__" : 1
00:02:45.166 Fetching value of define "__RDSEED__" : 1
00:02:45.166 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:45.166 Fetching value of define "__znver1__" : (undefined)
00:02:45.166 Fetching value of define "__znver2__" : (undefined)
00:02:45.166 Fetching value of define "__znver3__" : (undefined)
00:02:45.166 Fetching value of define "__znver4__" : (undefined)
00:02:45.166 Library asan found: YES
00:02:45.166 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:45.166 Message: lib/log: Defining dependency "log"
00:02:45.166 Message: lib/kvargs: Defining dependency "kvargs"
00:02:45.166 Message: lib/telemetry: Defining dependency "telemetry"
00:02:45.166 Library rt found: YES
00:02:45.166 Checking for function "getentropy" : NO
00:02:45.166 Message: lib/eal: Defining dependency "eal"
00:02:45.166 Message: lib/ring: Defining dependency "ring"
00:02:45.166 Message: lib/rcu: Defining dependency "rcu"
00:02:45.166 Message: lib/mempool: Defining dependency "mempool"
00:02:45.166 Message: lib/mbuf: Defining dependency "mbuf"
00:02:45.166 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:45.166 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:45.166 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:45.166 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:45.166 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:45.166 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:45.166 Compiler for C supports arguments -mpclmul: YES
00:02:45.166 Compiler for C supports arguments -maes: YES
00:02:45.166 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:45.166 Compiler for C supports arguments -mavx512bw: YES
00:02:45.166 Compiler for C supports arguments -mavx512dq: YES
00:02:45.166 Compiler for C supports arguments -mavx512vl: YES
00:02:45.166 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:45.166 Compiler for C supports arguments -mavx2: YES
00:02:45.166 Compiler for C supports arguments -mavx: YES
00:02:45.166 Message: lib/net: Defining dependency "net"
00:02:45.166 Message: lib/meter: Defining dependency "meter"
lib/ethdev: Defining dependency "ethdev" 00:02:45.166 Message: lib/pci: Defining dependency "pci" 00:02:45.166 Message: lib/cmdline: Defining dependency "cmdline" 00:02:45.166 Message: lib/hash: Defining dependency "hash" 00:02:45.166 Message: lib/timer: Defining dependency "timer" 00:02:45.166 Message: lib/compressdev: Defining dependency "compressdev" 00:02:45.166 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:45.166 Message: lib/dmadev: Defining dependency "dmadev" 00:02:45.166 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:45.166 Message: lib/power: Defining dependency "power" 00:02:45.166 Message: lib/reorder: Defining dependency "reorder" 00:02:45.166 Message: lib/security: Defining dependency "security" 00:02:45.166 Has header "linux/userfaultfd.h" : YES 00:02:45.166 Has header "linux/vduse.h" : YES 00:02:45.166 Message: lib/vhost: Defining dependency "vhost" 00:02:45.166 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:45.166 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:45.166 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:45.166 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:45.166 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:45.166 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:45.166 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:45.166 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:45.166 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:45.166 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:45.166 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:45.166 Configuring doxy-api-html.conf using configuration 00:02:45.166 Configuring doxy-api-man.conf using configuration 00:02:45.166 Program mandb found: YES (/usr/bin/mandb) 00:02:45.166 Program sphinx-build found: NO 00:02:45.166 Configuring rte_build_config.h using configuration 00:02:45.166 Message: 00:02:45.166 ================= 00:02:45.166 Applications Enabled 00:02:45.166 ================= 00:02:45.166 00:02:45.166 apps: 00:02:45.166 00:02:45.166 00:02:45.166 Message: 00:02:45.166 ================= 00:02:45.166 Libraries Enabled 00:02:45.166 ================= 00:02:45.166 00:02:45.166 libs: 00:02:45.166 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:45.166 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:45.166 cryptodev, dmadev, power, reorder, security, vhost, 00:02:45.166 00:02:45.166 Message: 00:02:45.166 =============== 00:02:45.166 Drivers Enabled 00:02:45.166 =============== 00:02:45.166 00:02:45.166 common: 00:02:45.166 00:02:45.166 bus: 00:02:45.166 pci, vdev, 00:02:45.166 mempool: 00:02:45.166 ring, 00:02:45.166 dma: 00:02:45.166 00:02:45.166 net: 00:02:45.166 00:02:45.166 crypto: 00:02:45.166 00:02:45.166 compress: 00:02:45.166 00:02:45.166 vdpa: 00:02:45.166 00:02:45.166 00:02:45.166 Message: 00:02:45.166 ================= 00:02:45.166 Content Skipped 00:02:45.166 ================= 00:02:45.166 00:02:45.166 apps: 00:02:45.166 dumpcap: explicitly disabled via build config 00:02:45.166 graph: explicitly disabled via build config 00:02:45.166 pdump: explicitly disabled via build config 00:02:45.166 proc-info: explicitly disabled via build config 00:02:45.166 test-acl: explicitly disabled via build config 00:02:45.166 test-bbdev: explicitly disabled via build config 00:02:45.166 
test-cmdline: explicitly disabled via build config 00:02:45.166 test-compress-perf: explicitly disabled via build config 00:02:45.166 test-crypto-perf: explicitly disabled via build config 00:02:45.166 test-dma-perf: explicitly disabled via build config 00:02:45.166 test-eventdev: explicitly disabled via build config 00:02:45.166 test-fib: explicitly disabled via build config 00:02:45.166 test-flow-perf: explicitly disabled via build config 00:02:45.166 test-gpudev: explicitly disabled via build config 00:02:45.167 test-mldev: explicitly disabled via build config 00:02:45.167 test-pipeline: explicitly disabled via build config 00:02:45.167 test-pmd: explicitly disabled via build config 00:02:45.167 test-regex: explicitly disabled via build config 00:02:45.167 test-sad: explicitly disabled via build config 00:02:45.167 test-security-perf: explicitly disabled via build config 00:02:45.167 00:02:45.167 libs: 00:02:45.167 metrics: explicitly disabled via build config 00:02:45.167 acl: explicitly disabled via build config 00:02:45.167 bbdev: explicitly disabled via build config 00:02:45.167 bitratestats: explicitly disabled via build config 00:02:45.167 bpf: explicitly disabled via build config 00:02:45.167 cfgfile: explicitly disabled via build config 00:02:45.167 distributor: explicitly disabled via build config 00:02:45.167 efd: explicitly disabled via build config 00:02:45.167 eventdev: explicitly disabled via build config 00:02:45.167 dispatcher: explicitly disabled via build config 00:02:45.167 gpudev: explicitly disabled via build config 00:02:45.167 gro: explicitly disabled via build config 00:02:45.167 gso: explicitly disabled via build config 00:02:45.167 ip_frag: explicitly disabled via build config 00:02:45.167 jobstats: explicitly disabled via build config 00:02:45.167 latencystats: explicitly disabled via build config 00:02:45.167 lpm: explicitly disabled via build config 00:02:45.167 member: explicitly disabled via build config 00:02:45.167 pcapng: explicitly disabled via build config 00:02:45.167 rawdev: explicitly disabled via build config 00:02:45.167 regexdev: explicitly disabled via build config 00:02:45.167 mldev: explicitly disabled via build config 00:02:45.167 rib: explicitly disabled via build config 00:02:45.167 sched: explicitly disabled via build config 00:02:45.167 stack: explicitly disabled via build config 00:02:45.167 ipsec: explicitly disabled via build config 00:02:45.167 pdcp: explicitly disabled via build config 00:02:45.167 fib: explicitly disabled via build config 00:02:45.167 port: explicitly disabled via build config 00:02:45.167 pdump: explicitly disabled via build config 00:02:45.167 table: explicitly disabled via build config 00:02:45.167 pipeline: explicitly disabled via build config 00:02:45.167 graph: explicitly disabled via build config 00:02:45.167 node: explicitly disabled via build config 00:02:45.167 00:02:45.167 drivers: 00:02:45.167 common/cpt: not in enabled drivers build config 00:02:45.167 common/dpaax: not in enabled drivers build config 00:02:45.167 common/iavf: not in enabled drivers build config 00:02:45.167 common/idpf: not in enabled drivers build config 00:02:45.167 common/mvep: not in enabled drivers build config 00:02:45.167 common/octeontx: not in enabled drivers build config 00:02:45.167 bus/auxiliary: not in enabled drivers build config 00:02:45.167 bus/cdx: not in enabled drivers build config 00:02:45.167 bus/dpaa: not in enabled drivers build config 00:02:45.167 bus/fslmc: not in enabled drivers build config 00:02:45.167 
bus/ifpga: not in enabled drivers build config 00:02:45.167 bus/platform: not in enabled drivers build config 00:02:45.167 bus/vmbus: not in enabled drivers build config 00:02:45.167 common/cnxk: not in enabled drivers build config 00:02:45.167 common/mlx5: not in enabled drivers build config 00:02:45.167 common/nfp: not in enabled drivers build config 00:02:45.167 common/qat: not in enabled drivers build config 00:02:45.167 common/sfc_efx: not in enabled drivers build config 00:02:45.167 mempool/bucket: not in enabled drivers build config 00:02:45.167 mempool/cnxk: not in enabled drivers build config 00:02:45.167 mempool/dpaa: not in enabled drivers build config 00:02:45.167 mempool/dpaa2: not in enabled drivers build config 00:02:45.167 mempool/octeontx: not in enabled drivers build config 00:02:45.167 mempool/stack: not in enabled drivers build config 00:02:45.167 dma/cnxk: not in enabled drivers build config 00:02:45.167 dma/dpaa: not in enabled drivers build config 00:02:45.167 dma/dpaa2: not in enabled drivers build config 00:02:45.167 dma/hisilicon: not in enabled drivers build config 00:02:45.167 dma/idxd: not in enabled drivers build config 00:02:45.167 dma/ioat: not in enabled drivers build config 00:02:45.167 dma/skeleton: not in enabled drivers build config 00:02:45.167 net/af_packet: not in enabled drivers build config 00:02:45.167 net/af_xdp: not in enabled drivers build config 00:02:45.167 net/ark: not in enabled drivers build config 00:02:45.167 net/atlantic: not in enabled drivers build config 00:02:45.167 net/avp: not in enabled drivers build config 00:02:45.167 net/axgbe: not in enabled drivers build config 00:02:45.167 net/bnx2x: not in enabled drivers build config 00:02:45.167 net/bnxt: not in enabled drivers build config 00:02:45.167 net/bonding: not in enabled drivers build config 00:02:45.167 net/cnxk: not in enabled drivers build config 00:02:45.167 net/cpfl: not in enabled drivers build config 00:02:45.167 net/cxgbe: not in enabled drivers build config 00:02:45.167 net/dpaa: not in enabled drivers build config 00:02:45.167 net/dpaa2: not in enabled drivers build config 00:02:45.167 net/e1000: not in enabled drivers build config 00:02:45.167 net/ena: not in enabled drivers build config 00:02:45.167 net/enetc: not in enabled drivers build config 00:02:45.167 net/enetfec: not in enabled drivers build config 00:02:45.167 net/enic: not in enabled drivers build config 00:02:45.167 net/failsafe: not in enabled drivers build config 00:02:45.167 net/fm10k: not in enabled drivers build config 00:02:45.167 net/gve: not in enabled drivers build config 00:02:45.167 net/hinic: not in enabled drivers build config 00:02:45.167 net/hns3: not in enabled drivers build config 00:02:45.167 net/i40e: not in enabled drivers build config 00:02:45.167 net/iavf: not in enabled drivers build config 00:02:45.167 net/ice: not in enabled drivers build config 00:02:45.167 net/idpf: not in enabled drivers build config 00:02:45.167 net/igc: not in enabled drivers build config 00:02:45.167 net/ionic: not in enabled drivers build config 00:02:45.167 net/ipn3ke: not in enabled drivers build config 00:02:45.167 net/ixgbe: not in enabled drivers build config 00:02:45.167 net/mana: not in enabled drivers build config 00:02:45.167 net/memif: not in enabled drivers build config 00:02:45.167 net/mlx4: not in enabled drivers build config 00:02:45.167 net/mlx5: not in enabled drivers build config 00:02:45.167 net/mvneta: not in enabled drivers build config 00:02:45.167 net/mvpp2: not in enabled drivers 
build config 00:02:45.167 net/netvsc: not in enabled drivers build config 00:02:45.167 net/nfb: not in enabled drivers build config 00:02:45.167 net/nfp: not in enabled drivers build config 00:02:45.167 net/ngbe: not in enabled drivers build config 00:02:45.167 net/null: not in enabled drivers build config 00:02:45.167 net/octeontx: not in enabled drivers build config 00:02:45.167 net/octeon_ep: not in enabled drivers build config 00:02:45.167 net/pcap: not in enabled drivers build config 00:02:45.167 net/pfe: not in enabled drivers build config 00:02:45.167 net/qede: not in enabled drivers build config 00:02:45.167 net/ring: not in enabled drivers build config 00:02:45.167 net/sfc: not in enabled drivers build config 00:02:45.167 net/softnic: not in enabled drivers build config 00:02:45.167 net/tap: not in enabled drivers build config 00:02:45.167 net/thunderx: not in enabled drivers build config 00:02:45.167 net/txgbe: not in enabled drivers build config 00:02:45.167 net/vdev_netvsc: not in enabled drivers build config 00:02:45.167 net/vhost: not in enabled drivers build config 00:02:45.167 net/virtio: not in enabled drivers build config 00:02:45.167 net/vmxnet3: not in enabled drivers build config 00:02:45.167 raw/*: missing internal dependency, "rawdev" 00:02:45.167 crypto/armv8: not in enabled drivers build config 00:02:45.167 crypto/bcmfs: not in enabled drivers build config 00:02:45.167 crypto/caam_jr: not in enabled drivers build config 00:02:45.167 crypto/ccp: not in enabled drivers build config 00:02:45.167 crypto/cnxk: not in enabled drivers build config 00:02:45.167 crypto/dpaa_sec: not in enabled drivers build config 00:02:45.167 crypto/dpaa2_sec: not in enabled drivers build config 00:02:45.167 crypto/ipsec_mb: not in enabled drivers build config 00:02:45.167 crypto/mlx5: not in enabled drivers build config 00:02:45.167 crypto/mvsam: not in enabled drivers build config 00:02:45.167 crypto/nitrox: not in enabled drivers build config 00:02:45.167 crypto/null: not in enabled drivers build config 00:02:45.167 crypto/octeontx: not in enabled drivers build config 00:02:45.167 crypto/openssl: not in enabled drivers build config 00:02:45.167 crypto/scheduler: not in enabled drivers build config 00:02:45.167 crypto/uadk: not in enabled drivers build config 00:02:45.167 crypto/virtio: not in enabled drivers build config 00:02:45.167 compress/isal: not in enabled drivers build config 00:02:45.167 compress/mlx5: not in enabled drivers build config 00:02:45.167 compress/octeontx: not in enabled drivers build config 00:02:45.167 compress/zlib: not in enabled drivers build config 00:02:45.167 regex/*: missing internal dependency, "regexdev" 00:02:45.167 ml/*: missing internal dependency, "mldev" 00:02:45.167 vdpa/ifc: not in enabled drivers build config 00:02:45.167 vdpa/mlx5: not in enabled drivers build config 00:02:45.167 vdpa/nfp: not in enabled drivers build config 00:02:45.167 vdpa/sfc: not in enabled drivers build config 00:02:45.167 event/*: missing internal dependency, "eventdev" 00:02:45.167 baseband/*: missing internal dependency, "bbdev" 00:02:45.167 gpu/*: missing internal dependency, "gpudev" 00:02:45.167 00:02:45.167 00:02:45.425 Build targets in project: 84 00:02:45.425 00:02:45.425 DPDK 23.11.0 00:02:45.425 00:02:45.425 User defined options 00:02:45.425 buildtype : debug 00:02:45.425 default_library : shared 00:02:45.425 libdir : lib 00:02:45.426 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:45.426 b_sanitize : address 00:02:45.426 c_args : -fPIC -Werror 
-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:45.426 c_link_args : 00:02:45.426 cpu_instruction_set: native 00:02:45.426 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:45.426 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:45.426 enable_docs : false 00:02:45.426 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:45.426 enable_kmods : false 00:02:45.426 tests : false 00:02:45.426 00:02:45.426 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:45.991 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:45.991 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:45.991 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:45.991 [3/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:45.991 [4/264] Linking static target lib/librte_kvargs.a 00:02:45.991 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:45.991 [6/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:45.991 [7/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:45.991 [8/264] Linking static target lib/librte_log.a 00:02:45.991 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:45.991 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:46.249 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:46.249 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:46.249 [13/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:46.249 [14/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:46.506 [15/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:46.506 [16/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.506 [17/264] Linking static target lib/librte_telemetry.a 00:02:46.506 [18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:46.506 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:46.763 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:46.763 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:46.763 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:46.763 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:46.763 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:46.763 [25/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:46.763 [26/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.763 [27/264] Linking target lib/librte_log.so.24.0 00:02:46.764 [28/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:47.021 [29/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:47.021 [30/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:47.021 [31/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:47.021 [32/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:47.021 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:47.021 [34/264] Linking target lib/librte_kvargs.so.24.0 00:02:47.021 [35/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:47.021 [36/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:47.021 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:47.021 [38/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:47.021 [39/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.279 [40/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:47.279 [41/264] Linking target lib/librte_telemetry.so.24.0 00:02:47.279 [42/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:47.279 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:47.279 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:47.279 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:47.279 [46/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:47.536 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:47.536 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:47.536 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:47.536 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:47.536 [51/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:47.793 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:47.793 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:47.793 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:47.793 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:47.793 [56/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:47.793 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:47.793 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:47.793 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:47.793 [60/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:47.793 [61/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:48.052 [62/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:48.052 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:48.052 [64/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:48.052 [65/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:48.052 [66/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:48.052 [67/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:48.364 [68/264] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:48.364 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:48.364 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:48.364 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:48.364 [72/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:48.364 [73/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:48.364 [74/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:48.364 [75/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:48.364 [76/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:48.364 [77/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:48.621 [78/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:48.621 [79/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:48.621 [80/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:48.621 [81/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:48.621 [82/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:48.621 [83/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:48.621 [84/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:48.877 [85/264] Linking static target lib/librte_eal.a 00:02:48.877 [86/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:48.877 [87/264] Linking static target lib/librte_ring.a 00:02:48.877 [88/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:48.877 [89/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:48.877 [90/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:49.134 [91/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:49.134 [92/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:49.134 [93/264] Linking static target lib/librte_rcu.a 00:02:49.134 [94/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:49.134 [95/264] Linking static target lib/librte_mempool.a 00:02:49.134 [96/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:49.390 [97/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.390 [98/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:49.390 [99/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.390 [100/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:49.390 [101/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:49.648 [102/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:49.648 [103/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:49.648 [104/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:49.648 [105/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:49.648 [106/264] Linking static target lib/librte_net.a 00:02:49.906 [107/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:49.906 [108/264] Linking static target lib/librte_mbuf.a 00:02:49.906 [109/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:49.906 [110/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:49.906 [111/264] 
Linking static target lib/librte_meter.a 00:02:50.164 [112/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.164 [113/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.164 [114/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:50.164 [115/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:50.164 [116/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:50.422 [117/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.422 [118/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:50.680 [119/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.680 [120/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:50.938 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:50.938 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:50.938 [123/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:50.938 [124/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:50.938 [125/264] Linking static target lib/librte_pci.a 00:02:50.938 [126/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:51.195 [127/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:51.195 [128/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:51.195 [129/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:51.195 [130/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:51.195 [131/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:51.196 [132/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:51.196 [133/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:51.196 [134/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:51.196 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:51.196 [136/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:51.196 [137/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.454 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:51.454 [139/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:51.454 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:51.454 [141/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:51.454 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:51.712 [143/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:51.712 [144/264] Linking static target lib/librte_cmdline.a 00:02:51.712 [145/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:51.712 [146/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:51.712 [147/264] Linking static target lib/librte_timer.a 00:02:52.019 [148/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:52.019 [149/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:52.019 [150/264] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:52.019 [151/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:52.315 [152/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:52.315 [153/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:52.315 [154/264] Linking static target lib/librte_compressdev.a 00:02:52.315 [155/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:52.315 [156/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.315 [157/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:52.315 [158/264] Linking static target lib/librte_ethdev.a 00:02:52.315 [159/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:52.574 [160/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:52.574 [161/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:52.574 [162/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:52.574 [163/264] Linking static target lib/librte_dmadev.a 00:02:52.574 [164/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:52.574 [165/264] Linking static target lib/librte_hash.a 00:02:52.832 [166/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:52.832 [167/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:52.832 [168/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:52.832 [169/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:52.832 [170/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.832 [171/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.090 [172/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.090 [173/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:53.090 [174/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:53.090 [175/264] Linking static target lib/librte_cryptodev.a 00:02:53.090 [176/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:53.090 [177/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:53.090 [178/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:53.090 [179/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:53.349 [180/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:53.349 [181/264] Linking static target lib/librte_power.a 00:02:53.349 [182/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.349 [183/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:53.607 [184/264] Linking static target lib/librte_reorder.a 00:02:53.607 [185/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:53.607 [186/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:53.607 [187/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:53.607 [188/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:53.607 [189/264] Linking static target lib/librte_security.a 00:02:53.864 [190/264] Generating lib/reorder.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:54.121 [191/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:54.121 [192/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.121 [193/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:54.121 [194/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.121 [195/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:54.121 [196/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:54.379 [197/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:54.379 [198/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:54.637 [199/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:54.637 [200/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:54.637 [201/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:54.637 [202/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:54.895 [203/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:54.895 [204/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:54.895 [205/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:54.895 [206/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:54.895 [207/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.895 [208/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:54.895 [209/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:54.895 [210/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:54.895 [211/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:54.895 [212/264] Linking static target drivers/librte_bus_vdev.a 00:02:54.895 [213/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:54.895 [214/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:54.895 [215/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:54.895 [216/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:54.895 [217/264] Linking static target drivers/librte_bus_pci.a 00:02:55.152 [218/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:55.153 [219/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.153 [220/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:55.153 [221/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:55.153 [222/264] Linking static target drivers/librte_mempool_ring.a 00:02:55.411 [223/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.345 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:56.602 [225/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.602 [226/264] Linking target lib/librte_eal.so.24.0 00:02:56.602 [227/264] Generating symbol file 
lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:56.860 [228/264] Linking target lib/librte_meter.so.24.0 00:02:56.860 [229/264] Linking target lib/librte_timer.so.24.0 00:02:56.860 [230/264] Linking target lib/librte_pci.so.24.0 00:02:56.860 [231/264] Linking target lib/librte_ring.so.24.0 00:02:56.860 [232/264] Linking target lib/librte_dmadev.so.24.0 00:02:56.860 [233/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:56.860 [234/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:56.860 [235/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:56.860 [236/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:56.860 [237/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:56.860 [238/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:56.860 [239/264] Linking target lib/librte_mempool.so.24.0 00:02:56.860 [240/264] Linking target lib/librte_rcu.so.24.0 00:02:56.860 [241/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:56.860 [242/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:56.860 [243/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:57.118 [244/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:57.118 [245/264] Linking target lib/librte_mbuf.so.24.0 00:02:57.118 [246/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:57.118 [247/264] Linking target lib/librte_net.so.24.0 00:02:57.118 [248/264] Linking target lib/librte_compressdev.so.24.0 00:02:57.118 [249/264] Linking target lib/librte_reorder.so.24.0 00:02:57.118 [250/264] Linking target lib/librte_cryptodev.so.24.0 00:02:57.376 [251/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:57.376 [252/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:57.376 [253/264] Linking target lib/librte_hash.so.24.0 00:02:57.376 [254/264] Linking target lib/librte_cmdline.so.24.0 00:02:57.376 [255/264] Linking target lib/librte_security.so.24.0 00:02:57.376 [256/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:57.634 [257/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.634 [258/264] Linking target lib/librte_ethdev.so.24.0 00:02:57.892 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:57.892 [260/264] Linking target lib/librte_power.so.24.0 00:02:58.149 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:58.149 [262/264] Linking static target lib/librte_vhost.a 00:03:00.047 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.047 [264/264] Linking target lib/librte_vhost.so.24.0 00:03:00.047 INFO: autodetecting backend as ninja 00:03:00.047 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:00.612 CC lib/ut/ut.o 00:03:00.612 CC lib/log/log.o 00:03:00.612 CC lib/ut_mock/mock.o 00:03:00.612 CC lib/log/log_deprecated.o 00:03:00.612 CC lib/log/log_flags.o 00:03:00.612 LIB libspdk_ut_mock.a 00:03:00.934 LIB libspdk_log.a 00:03:00.934 SO libspdk_ut_mock.so.5.0 00:03:00.934 LIB libspdk_ut.a 00:03:00.934 SO libspdk_log.so.6.1 00:03:00.934 SO 
libspdk_ut.so.1.0 00:03:00.934 SYMLINK libspdk_ut_mock.so 00:03:00.934 SYMLINK libspdk_ut.so 00:03:00.935 SYMLINK libspdk_log.so 00:03:00.935 CXX lib/trace_parser/trace.o 00:03:00.935 CC lib/dma/dma.o 00:03:00.935 CC lib/ioat/ioat.o 00:03:00.935 CC lib/util/base64.o 00:03:00.935 CC lib/util/bit_array.o 00:03:00.935 CC lib/util/cpuset.o 00:03:00.935 CC lib/util/crc16.o 00:03:00.935 CC lib/util/crc32.o 00:03:00.935 CC lib/util/crc32c.o 00:03:00.935 CC lib/vfio_user/host/vfio_user_pci.o 00:03:00.935 CC lib/vfio_user/host/vfio_user.o 00:03:00.935 CC lib/util/crc32_ieee.o 00:03:01.210 CC lib/util/crc64.o 00:03:01.210 CC lib/util/dif.o 00:03:01.210 LIB libspdk_dma.a 00:03:01.210 SO libspdk_dma.so.3.0 00:03:01.210 CC lib/util/fd.o 00:03:01.210 CC lib/util/file.o 00:03:01.210 SYMLINK libspdk_dma.so 00:03:01.210 CC lib/util/hexlify.o 00:03:01.210 CC lib/util/iov.o 00:03:01.210 CC lib/util/math.o 00:03:01.210 LIB libspdk_ioat.a 00:03:01.210 CC lib/util/pipe.o 00:03:01.210 SO libspdk_ioat.so.6.0 00:03:01.210 LIB libspdk_vfio_user.a 00:03:01.210 CC lib/util/strerror_tls.o 00:03:01.210 CC lib/util/string.o 00:03:01.210 SO libspdk_vfio_user.so.4.0 00:03:01.210 SYMLINK libspdk_ioat.so 00:03:01.210 CC lib/util/uuid.o 00:03:01.210 CC lib/util/fd_group.o 00:03:01.210 CC lib/util/xor.o 00:03:01.211 SYMLINK libspdk_vfio_user.so 00:03:01.211 CC lib/util/zipf.o 00:03:01.778 LIB libspdk_util.a 00:03:01.778 SO libspdk_util.so.8.0 00:03:01.778 LIB libspdk_trace_parser.a 00:03:01.778 SO libspdk_trace_parser.so.4.0 00:03:01.778 SYMLINK libspdk_util.so 00:03:01.778 SYMLINK libspdk_trace_parser.so 00:03:02.035 CC lib/json/json_parse.o 00:03:02.035 CC lib/json/json_util.o 00:03:02.035 CC lib/json/json_write.o 00:03:02.035 CC lib/conf/conf.o 00:03:02.035 CC lib/vmd/vmd.o 00:03:02.035 CC lib/env_dpdk/env.o 00:03:02.035 CC lib/vmd/led.o 00:03:02.035 CC lib/rdma/common.o 00:03:02.035 CC lib/env_dpdk/memory.o 00:03:02.035 CC lib/idxd/idxd.o 00:03:02.035 CC lib/rdma/rdma_verbs.o 00:03:02.035 LIB libspdk_conf.a 00:03:02.035 CC lib/idxd/idxd_user.o 00:03:02.035 CC lib/idxd/idxd_kernel.o 00:03:02.293 SO libspdk_conf.so.5.0 00:03:02.293 SYMLINK libspdk_conf.so 00:03:02.293 LIB libspdk_json.a 00:03:02.293 CC lib/env_dpdk/pci.o 00:03:02.293 CC lib/env_dpdk/init.o 00:03:02.293 SO libspdk_json.so.5.1 00:03:02.293 LIB libspdk_rdma.a 00:03:02.293 SO libspdk_rdma.so.5.0 00:03:02.293 CC lib/env_dpdk/threads.o 00:03:02.293 SYMLINK libspdk_json.so 00:03:02.293 CC lib/env_dpdk/pci_ioat.o 00:03:02.293 SYMLINK libspdk_rdma.so 00:03:02.293 CC lib/env_dpdk/pci_virtio.o 00:03:02.293 CC lib/jsonrpc/jsonrpc_server.o 00:03:02.293 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:02.551 CC lib/env_dpdk/pci_vmd.o 00:03:02.551 CC lib/env_dpdk/pci_idxd.o 00:03:02.551 CC lib/env_dpdk/pci_event.o 00:03:02.551 LIB libspdk_idxd.a 00:03:02.551 SO libspdk_idxd.so.11.0 00:03:02.551 CC lib/jsonrpc/jsonrpc_client.o 00:03:02.551 CC lib/env_dpdk/sigbus_handler.o 00:03:02.551 LIB libspdk_vmd.a 00:03:02.551 CC lib/env_dpdk/pci_dpdk.o 00:03:02.551 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:02.551 SYMLINK libspdk_idxd.so 00:03:02.551 SO libspdk_vmd.so.5.0 00:03:02.551 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:02.551 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:02.551 SYMLINK libspdk_vmd.so 00:03:02.809 LIB libspdk_jsonrpc.a 00:03:02.809 SO libspdk_jsonrpc.so.5.1 00:03:02.809 SYMLINK libspdk_jsonrpc.so 00:03:03.066 CC lib/rpc/rpc.o 00:03:03.324 LIB libspdk_rpc.a 00:03:03.324 SO libspdk_rpc.so.5.0 00:03:03.324 SYMLINK libspdk_rpc.so 00:03:03.324 CC lib/sock/sock.o 00:03:03.324 CC 
lib/sock/sock_rpc.o 00:03:03.324 CC lib/notify/notify.o 00:03:03.324 CC lib/notify/notify_rpc.o 00:03:03.324 CC lib/trace/trace.o 00:03:03.324 CC lib/trace/trace_flags.o 00:03:03.324 CC lib/trace/trace_rpc.o 00:03:03.585 LIB libspdk_env_dpdk.a 00:03:03.585 LIB libspdk_notify.a 00:03:03.585 SO libspdk_env_dpdk.so.13.0 00:03:03.585 SO libspdk_notify.so.5.0 00:03:03.585 SYMLINK libspdk_notify.so 00:03:03.585 LIB libspdk_trace.a 00:03:03.585 SYMLINK libspdk_env_dpdk.so 00:03:03.585 SO libspdk_trace.so.9.0 00:03:03.585 SYMLINK libspdk_trace.so 00:03:03.844 LIB libspdk_sock.a 00:03:03.844 SO libspdk_sock.so.8.0 00:03:03.844 CC lib/thread/iobuf.o 00:03:03.844 CC lib/thread/thread.o 00:03:03.844 SYMLINK libspdk_sock.so 00:03:04.101 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:04.101 CC lib/nvme/nvme_fabric.o 00:03:04.101 CC lib/nvme/nvme_ns.o 00:03:04.101 CC lib/nvme/nvme_ns_cmd.o 00:03:04.101 CC lib/nvme/nvme_ctrlr.o 00:03:04.101 CC lib/nvme/nvme_qpair.o 00:03:04.101 CC lib/nvme/nvme_pcie_common.o 00:03:04.101 CC lib/nvme/nvme_pcie.o 00:03:04.101 CC lib/nvme/nvme.o 00:03:04.664 CC lib/nvme/nvme_quirks.o 00:03:04.664 CC lib/nvme/nvme_transport.o 00:03:04.664 CC lib/nvme/nvme_discovery.o 00:03:04.664 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:04.664 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:04.922 CC lib/nvme/nvme_tcp.o 00:03:04.922 CC lib/nvme/nvme_opal.o 00:03:04.922 CC lib/nvme/nvme_io_msg.o 00:03:04.922 CC lib/nvme/nvme_poll_group.o 00:03:05.180 CC lib/nvme/nvme_zns.o 00:03:05.180 CC lib/nvme/nvme_cuse.o 00:03:05.180 CC lib/nvme/nvme_vfio_user.o 00:03:05.180 CC lib/nvme/nvme_rdma.o 00:03:05.438 LIB libspdk_thread.a 00:03:05.438 SO libspdk_thread.so.9.0 00:03:05.438 SYMLINK libspdk_thread.so 00:03:05.438 CC lib/init/json_config.o 00:03:05.438 CC lib/accel/accel.o 00:03:05.438 CC lib/blob/blobstore.o 00:03:05.695 CC lib/virtio/virtio.o 00:03:05.695 CC lib/virtio/virtio_vhost_user.o 00:03:05.695 CC lib/init/subsystem.o 00:03:05.695 CC lib/virtio/virtio_vfio_user.o 00:03:05.695 CC lib/blob/request.o 00:03:05.695 CC lib/blob/zeroes.o 00:03:05.954 CC lib/init/subsystem_rpc.o 00:03:05.954 CC lib/init/rpc.o 00:03:05.954 CC lib/virtio/virtio_pci.o 00:03:05.954 CC lib/accel/accel_rpc.o 00:03:05.954 CC lib/blob/blob_bs_dev.o 00:03:05.954 CC lib/accel/accel_sw.o 00:03:05.954 LIB libspdk_init.a 00:03:05.954 SO libspdk_init.so.4.0 00:03:05.954 SYMLINK libspdk_init.so 00:03:06.214 CC lib/event/app.o 00:03:06.214 CC lib/event/log_rpc.o 00:03:06.214 CC lib/event/reactor.o 00:03:06.214 CC lib/event/app_rpc.o 00:03:06.214 CC lib/event/scheduler_static.o 00:03:06.214 LIB libspdk_virtio.a 00:03:06.214 SO libspdk_virtio.so.6.0 00:03:06.214 SYMLINK libspdk_virtio.so 00:03:06.474 LIB libspdk_event.a 00:03:06.474 LIB libspdk_accel.a 00:03:06.474 LIB libspdk_nvme.a 00:03:06.474 SO libspdk_event.so.12.0 00:03:06.474 SO libspdk_accel.so.14.0 00:03:06.736 SYMLINK libspdk_event.so 00:03:06.736 SYMLINK libspdk_accel.so 00:03:06.736 SO libspdk_nvme.so.12.0 00:03:06.736 CC lib/bdev/bdev.o 00:03:06.736 CC lib/bdev/bdev_rpc.o 00:03:06.736 CC lib/bdev/part.o 00:03:06.736 CC lib/bdev/scsi_nvme.o 00:03:06.736 CC lib/bdev/bdev_zone.o 00:03:06.997 SYMLINK libspdk_nvme.so 00:03:08.382 LIB libspdk_blob.a 00:03:08.382 SO libspdk_blob.so.10.1 00:03:08.642 SYMLINK libspdk_blob.so 00:03:08.642 CC lib/lvol/lvol.o 00:03:08.642 CC lib/blobfs/blobfs.o 00:03:08.642 CC lib/blobfs/tree.o 00:03:09.580 LIB libspdk_bdev.a 00:03:09.580 SO libspdk_bdev.so.14.0 00:03:09.580 LIB libspdk_blobfs.a 00:03:09.580 SO libspdk_blobfs.so.9.0 00:03:09.580 LIB libspdk_lvol.a 
00:03:09.580 SYMLINK libspdk_bdev.so 00:03:09.580 SO libspdk_lvol.so.9.1 00:03:09.580 SYMLINK libspdk_blobfs.so 00:03:09.580 SYMLINK libspdk_lvol.so 00:03:09.580 CC lib/nbd/nbd_rpc.o 00:03:09.580 CC lib/nbd/nbd.o 00:03:09.580 CC lib/ftl/ftl_core.o 00:03:09.580 CC lib/ftl/ftl_init.o 00:03:09.580 CC lib/ftl/ftl_layout.o 00:03:09.580 CC lib/ftl/ftl_debug.o 00:03:09.580 CC lib/scsi/dev.o 00:03:09.580 CC lib/ftl/ftl_io.o 00:03:09.580 CC lib/ublk/ublk.o 00:03:09.580 CC lib/nvmf/ctrlr.o 00:03:09.838 CC lib/ublk/ublk_rpc.o 00:03:09.838 CC lib/scsi/lun.o 00:03:09.838 CC lib/scsi/port.o 00:03:09.838 CC lib/scsi/scsi.o 00:03:09.838 CC lib/scsi/scsi_bdev.o 00:03:09.838 CC lib/scsi/scsi_pr.o 00:03:10.097 CC lib/nvmf/ctrlr_discovery.o 00:03:10.097 CC lib/scsi/scsi_rpc.o 00:03:10.097 CC lib/ftl/ftl_sb.o 00:03:10.097 CC lib/ftl/ftl_l2p.o 00:03:10.097 LIB libspdk_nbd.a 00:03:10.097 SO libspdk_nbd.so.6.0 00:03:10.097 CC lib/scsi/task.o 00:03:10.097 CC lib/nvmf/ctrlr_bdev.o 00:03:10.097 SYMLINK libspdk_nbd.so 00:03:10.097 CC lib/nvmf/subsystem.o 00:03:10.097 CC lib/ftl/ftl_l2p_flat.o 00:03:10.097 CC lib/ftl/ftl_nv_cache.o 00:03:10.356 CC lib/ftl/ftl_band.o 00:03:10.356 CC lib/nvmf/nvmf.o 00:03:10.356 LIB libspdk_ublk.a 00:03:10.356 SO libspdk_ublk.so.2.0 00:03:10.356 LIB libspdk_scsi.a 00:03:10.356 SYMLINK libspdk_ublk.so 00:03:10.356 CC lib/nvmf/nvmf_rpc.o 00:03:10.356 SO libspdk_scsi.so.8.0 00:03:10.356 CC lib/nvmf/transport.o 00:03:10.356 CC lib/ftl/ftl_band_ops.o 00:03:10.356 SYMLINK libspdk_scsi.so 00:03:10.356 CC lib/ftl/ftl_writer.o 00:03:10.614 CC lib/iscsi/conn.o 00:03:10.872 CC lib/iscsi/init_grp.o 00:03:10.872 CC lib/vhost/vhost.o 00:03:10.872 CC lib/vhost/vhost_rpc.o 00:03:10.872 CC lib/nvmf/tcp.o 00:03:10.872 CC lib/nvmf/rdma.o 00:03:10.872 CC lib/iscsi/iscsi.o 00:03:11.131 CC lib/iscsi/md5.o 00:03:11.131 CC lib/ftl/ftl_rq.o 00:03:11.131 CC lib/ftl/ftl_reloc.o 00:03:11.131 CC lib/ftl/ftl_l2p_cache.o 00:03:11.131 CC lib/ftl/ftl_p2l.o 00:03:11.389 CC lib/ftl/mngt/ftl_mngt.o 00:03:11.389 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:11.389 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:11.389 CC lib/vhost/vhost_scsi.o 00:03:11.389 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:11.389 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:11.389 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:11.648 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:11.648 CC lib/vhost/vhost_blk.o 00:03:11.648 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:11.648 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:11.648 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:11.648 CC lib/iscsi/param.o 00:03:11.648 CC lib/vhost/rte_vhost_user.o 00:03:11.906 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:11.906 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:11.906 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:11.906 CC lib/ftl/utils/ftl_conf.o 00:03:12.164 CC lib/iscsi/portal_grp.o 00:03:12.164 CC lib/ftl/utils/ftl_md.o 00:03:12.164 CC lib/iscsi/tgt_node.o 00:03:12.164 CC lib/ftl/utils/ftl_mempool.o 00:03:12.164 CC lib/ftl/utils/ftl_bitmap.o 00:03:12.164 CC lib/ftl/utils/ftl_property.o 00:03:12.164 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:12.164 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:12.164 CC lib/iscsi/iscsi_subsystem.o 00:03:12.164 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:12.164 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:12.422 CC lib/iscsi/iscsi_rpc.o 00:03:12.422 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:12.422 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:12.422 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:12.422 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:12.422 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:12.422 CC 
lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:12.422 CC lib/ftl/base/ftl_base_dev.o 00:03:12.422 CC lib/ftl/base/ftl_base_bdev.o 00:03:12.422 CC lib/ftl/ftl_trace.o 00:03:12.680 CC lib/iscsi/task.o 00:03:12.680 LIB libspdk_iscsi.a 00:03:12.680 LIB libspdk_vhost.a 00:03:12.680 LIB libspdk_ftl.a 00:03:12.680 SO libspdk_iscsi.so.7.0 00:03:12.680 SO libspdk_vhost.so.7.1 00:03:12.939 SYMLINK libspdk_vhost.so 00:03:12.939 SO libspdk_ftl.so.8.0 00:03:12.939 SYMLINK libspdk_iscsi.so 00:03:12.939 LIB libspdk_nvmf.a 00:03:12.939 SO libspdk_nvmf.so.17.0 00:03:12.939 SYMLINK libspdk_ftl.so 00:03:13.197 SYMLINK libspdk_nvmf.so 00:03:13.456 CC module/env_dpdk/env_dpdk_rpc.o 00:03:13.456 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:13.456 CC module/scheduler/gscheduler/gscheduler.o 00:03:13.456 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:13.456 CC module/accel/error/accel_error.o 00:03:13.456 CC module/accel/iaa/accel_iaa.o 00:03:13.456 CC module/blob/bdev/blob_bdev.o 00:03:13.456 CC module/accel/dsa/accel_dsa.o 00:03:13.456 CC module/sock/posix/posix.o 00:03:13.456 CC module/accel/ioat/accel_ioat.o 00:03:13.456 LIB libspdk_env_dpdk_rpc.a 00:03:13.456 SO libspdk_env_dpdk_rpc.so.5.0 00:03:13.456 SYMLINK libspdk_env_dpdk_rpc.so 00:03:13.456 CC module/accel/dsa/accel_dsa_rpc.o 00:03:13.456 LIB libspdk_scheduler_dpdk_governor.a 00:03:13.456 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:13.456 CC module/accel/error/accel_error_rpc.o 00:03:13.456 LIB libspdk_scheduler_gscheduler.a 00:03:13.456 SO libspdk_scheduler_gscheduler.so.3.0 00:03:13.456 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:13.456 LIB libspdk_scheduler_dynamic.a 00:03:13.456 CC module/accel/ioat/accel_ioat_rpc.o 00:03:13.456 CC module/accel/iaa/accel_iaa_rpc.o 00:03:13.456 SYMLINK libspdk_scheduler_gscheduler.so 00:03:13.456 SO libspdk_scheduler_dynamic.so.3.0 00:03:13.715 LIB libspdk_blob_bdev.a 00:03:13.715 SO libspdk_blob_bdev.so.10.1 00:03:13.715 SYMLINK libspdk_scheduler_dynamic.so 00:03:13.715 LIB libspdk_accel_dsa.a 00:03:13.715 SYMLINK libspdk_blob_bdev.so 00:03:13.715 SO libspdk_accel_dsa.so.4.0 00:03:13.715 LIB libspdk_accel_error.a 00:03:13.715 LIB libspdk_accel_iaa.a 00:03:13.715 LIB libspdk_accel_ioat.a 00:03:13.715 SO libspdk_accel_error.so.1.0 00:03:13.715 SYMLINK libspdk_accel_dsa.so 00:03:13.715 SO libspdk_accel_iaa.so.2.0 00:03:13.715 SO libspdk_accel_ioat.so.5.0 00:03:13.715 SYMLINK libspdk_accel_error.so 00:03:13.715 SYMLINK libspdk_accel_iaa.so 00:03:13.715 SYMLINK libspdk_accel_ioat.so 00:03:13.715 CC module/bdev/delay/vbdev_delay.o 00:03:13.715 CC module/bdev/lvol/vbdev_lvol.o 00:03:13.715 CC module/bdev/gpt/gpt.o 00:03:13.715 CC module/blobfs/bdev/blobfs_bdev.o 00:03:13.715 CC module/bdev/error/vbdev_error.o 00:03:13.715 CC module/bdev/malloc/bdev_malloc.o 00:03:13.715 CC module/bdev/null/bdev_null.o 00:03:13.972 CC module/bdev/passthru/vbdev_passthru.o 00:03:13.972 CC module/bdev/nvme/bdev_nvme.o 00:03:13.972 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:13.972 CC module/bdev/gpt/vbdev_gpt.o 00:03:13.972 CC module/bdev/error/vbdev_error_rpc.o 00:03:13.973 CC module/bdev/null/bdev_null_rpc.o 00:03:13.973 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:13.973 LIB libspdk_blobfs_bdev.a 00:03:14.286 LIB libspdk_bdev_error.a 00:03:14.286 LIB libspdk_sock_posix.a 00:03:14.286 SO libspdk_blobfs_bdev.so.5.0 00:03:14.286 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:14.286 SO libspdk_bdev_error.so.5.0 00:03:14.286 SO libspdk_sock_posix.so.5.0 00:03:14.286 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:14.286 LIB 
libspdk_bdev_null.a 00:03:14.286 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:14.286 SYMLINK libspdk_blobfs_bdev.so 00:03:14.286 LIB libspdk_bdev_gpt.a 00:03:14.286 SO libspdk_bdev_null.so.5.0 00:03:14.286 LIB libspdk_bdev_delay.a 00:03:14.286 SYMLINK libspdk_bdev_error.so 00:03:14.286 SO libspdk_bdev_gpt.so.5.0 00:03:14.286 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:14.286 CC module/bdev/nvme/nvme_rpc.o 00:03:14.286 SO libspdk_bdev_delay.so.5.0 00:03:14.286 SYMLINK libspdk_sock_posix.so 00:03:14.286 SYMLINK libspdk_bdev_null.so 00:03:14.286 SYMLINK libspdk_bdev_gpt.so 00:03:14.286 SYMLINK libspdk_bdev_delay.so 00:03:14.286 LIB libspdk_bdev_passthru.a 00:03:14.286 SO libspdk_bdev_passthru.so.5.0 00:03:14.286 LIB libspdk_bdev_malloc.a 00:03:14.286 CC module/bdev/split/vbdev_split.o 00:03:14.286 SO libspdk_bdev_malloc.so.5.0 00:03:14.286 CC module/bdev/raid/bdev_raid.o 00:03:14.286 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:14.286 CC module/bdev/xnvme/bdev_xnvme.o 00:03:14.286 SYMLINK libspdk_bdev_passthru.so 00:03:14.286 SYMLINK libspdk_bdev_malloc.so 00:03:14.286 CC module/bdev/split/vbdev_split_rpc.o 00:03:14.559 CC module/bdev/aio/bdev_aio.o 00:03:14.559 CC module/bdev/nvme/bdev_mdns_client.o 00:03:14.559 LIB libspdk_bdev_lvol.a 00:03:14.559 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:14.559 SO libspdk_bdev_lvol.so.5.0 00:03:14.559 LIB libspdk_bdev_split.a 00:03:14.559 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:14.559 SO libspdk_bdev_split.so.5.0 00:03:14.559 SYMLINK libspdk_bdev_lvol.so 00:03:14.559 CC module/bdev/aio/bdev_aio_rpc.o 00:03:14.559 LIB libspdk_bdev_zone_block.a 00:03:14.559 SYMLINK libspdk_bdev_split.so 00:03:14.559 CC module/bdev/nvme/vbdev_opal.o 00:03:14.559 SO libspdk_bdev_zone_block.so.5.0 00:03:14.559 LIB libspdk_bdev_xnvme.a 00:03:14.559 CC module/bdev/ftl/bdev_ftl.o 00:03:14.817 SO libspdk_bdev_xnvme.so.2.0 00:03:14.817 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:14.817 SYMLINK libspdk_bdev_zone_block.so 00:03:14.817 LIB libspdk_bdev_aio.a 00:03:14.817 CC module/bdev/iscsi/bdev_iscsi.o 00:03:14.817 SO libspdk_bdev_aio.so.5.0 00:03:14.817 SYMLINK libspdk_bdev_xnvme.so 00:03:14.817 CC module/bdev/raid/bdev_raid_rpc.o 00:03:14.817 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:14.817 SYMLINK libspdk_bdev_aio.so 00:03:14.817 CC module/bdev/raid/bdev_raid_sb.o 00:03:14.817 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:14.817 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:14.817 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:14.817 LIB libspdk_bdev_ftl.a 00:03:15.076 SO libspdk_bdev_ftl.so.5.0 00:03:15.076 CC module/bdev/raid/raid0.o 00:03:15.076 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:15.076 CC module/bdev/raid/raid1.o 00:03:15.076 CC module/bdev/raid/concat.o 00:03:15.076 SYMLINK libspdk_bdev_ftl.so 00:03:15.076 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:15.076 LIB libspdk_bdev_iscsi.a 00:03:15.076 SO libspdk_bdev_iscsi.so.5.0 00:03:15.076 SYMLINK libspdk_bdev_iscsi.so 00:03:15.076 LIB libspdk_bdev_raid.a 00:03:15.336 SO libspdk_bdev_raid.so.5.0 00:03:15.336 SYMLINK libspdk_bdev_raid.so 00:03:15.336 LIB libspdk_bdev_virtio.a 00:03:15.336 SO libspdk_bdev_virtio.so.5.0 00:03:15.336 SYMLINK libspdk_bdev_virtio.so 00:03:16.282 LIB libspdk_bdev_nvme.a 00:03:16.282 SO libspdk_bdev_nvme.so.6.0 00:03:16.282 SYMLINK libspdk_bdev_nvme.so 00:03:16.543 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:16.543 CC module/event/subsystems/scheduler/scheduler.o 00:03:16.543 CC module/event/subsystems/iobuf/iobuf.o 00:03:16.543 CC 
module/event/subsystems/sock/sock.o 00:03:16.543 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:16.543 CC module/event/subsystems/vmd/vmd.o 00:03:16.543 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:16.804 LIB libspdk_event_sock.a 00:03:16.804 LIB libspdk_event_vhost_blk.a 00:03:16.804 LIB libspdk_event_scheduler.a 00:03:16.804 LIB libspdk_event_vmd.a 00:03:16.804 SO libspdk_event_sock.so.4.0 00:03:16.804 SO libspdk_event_vhost_blk.so.2.0 00:03:16.804 LIB libspdk_event_iobuf.a 00:03:16.804 SO libspdk_event_scheduler.so.3.0 00:03:16.804 SO libspdk_event_vmd.so.5.0 00:03:16.804 SO libspdk_event_iobuf.so.2.0 00:03:16.804 SYMLINK libspdk_event_sock.so 00:03:16.804 SYMLINK libspdk_event_vhost_blk.so 00:03:16.804 SYMLINK libspdk_event_scheduler.so 00:03:16.804 SYMLINK libspdk_event_vmd.so 00:03:16.804 SYMLINK libspdk_event_iobuf.so 00:03:17.066 CC module/event/subsystems/accel/accel.o 00:03:17.066 LIB libspdk_event_accel.a 00:03:17.066 SO libspdk_event_accel.so.5.0 00:03:17.066 SYMLINK libspdk_event_accel.so 00:03:17.326 CC module/event/subsystems/bdev/bdev.o 00:03:17.587 LIB libspdk_event_bdev.a 00:03:17.587 SO libspdk_event_bdev.so.5.0 00:03:17.587 SYMLINK libspdk_event_bdev.so 00:03:17.847 CC module/event/subsystems/scsi/scsi.o 00:03:17.847 CC module/event/subsystems/ublk/ublk.o 00:03:17.847 CC module/event/subsystems/nbd/nbd.o 00:03:17.847 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:17.847 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:17.847 LIB libspdk_event_ublk.a 00:03:17.847 LIB libspdk_event_nbd.a 00:03:17.847 LIB libspdk_event_scsi.a 00:03:17.847 SO libspdk_event_nbd.so.5.0 00:03:17.847 SO libspdk_event_ublk.so.2.0 00:03:17.847 SO libspdk_event_scsi.so.5.0 00:03:17.847 SYMLINK libspdk_event_ublk.so 00:03:17.847 LIB libspdk_event_nvmf.a 00:03:17.847 SYMLINK libspdk_event_nbd.so 00:03:17.847 SYMLINK libspdk_event_scsi.so 00:03:17.847 SO libspdk_event_nvmf.so.5.0 00:03:18.108 SYMLINK libspdk_event_nvmf.so 00:03:18.108 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:18.108 CC module/event/subsystems/iscsi/iscsi.o 00:03:18.108 LIB libspdk_event_vhost_scsi.a 00:03:18.108 SO libspdk_event_vhost_scsi.so.2.0 00:03:18.108 LIB libspdk_event_iscsi.a 00:03:18.108 SO libspdk_event_iscsi.so.5.0 00:03:18.368 SYMLINK libspdk_event_vhost_scsi.so 00:03:18.369 SYMLINK libspdk_event_iscsi.so 00:03:18.369 SO libspdk.so.5.0 00:03:18.369 SYMLINK libspdk.so 00:03:18.369 CXX app/trace/trace.o 00:03:18.369 CC app/spdk_nvme_perf/perf.o 00:03:18.369 CC app/spdk_lspci/spdk_lspci.o 00:03:18.369 CC app/trace_record/trace_record.o 00:03:18.629 CC app/nvmf_tgt/nvmf_main.o 00:03:18.629 CC app/iscsi_tgt/iscsi_tgt.o 00:03:18.629 CC app/spdk_tgt/spdk_tgt.o 00:03:18.629 CC examples/accel/perf/accel_perf.o 00:03:18.629 CC test/app/bdev_svc/bdev_svc.o 00:03:18.629 CC test/accel/dif/dif.o 00:03:18.629 LINK spdk_lspci 00:03:18.629 LINK nvmf_tgt 00:03:18.629 LINK spdk_tgt 00:03:18.629 LINK bdev_svc 00:03:18.629 LINK spdk_trace_record 00:03:18.629 LINK iscsi_tgt 00:03:18.629 CC app/spdk_nvme_identify/identify.o 00:03:18.889 LINK spdk_trace 00:03:18.889 CC test/app/histogram_perf/histogram_perf.o 00:03:18.889 CC examples/bdev/hello_world/hello_bdev.o 00:03:18.889 CC test/app/jsoncat/jsoncat.o 00:03:18.889 CC examples/blob/hello_world/hello_blob.o 00:03:18.889 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:18.889 CC test/app/stub/stub.o 00:03:18.889 LINK dif 00:03:18.889 LINK accel_perf 00:03:19.147 LINK jsoncat 00:03:19.148 LINK histogram_perf 00:03:19.148 LINK hello_bdev 00:03:19.148 LINK stub 00:03:19.148 
LINK hello_blob 00:03:19.148 CC app/spdk_nvme_discover/discovery_aer.o 00:03:19.148 CC app/spdk_top/spdk_top.o 00:03:19.148 CC examples/blob/cli/blobcli.o 00:03:19.148 LINK nvme_fuzz 00:03:19.148 LINK spdk_nvme_perf 00:03:19.407 CC test/bdev/bdevio/bdevio.o 00:03:19.407 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:19.407 LINK spdk_nvme_discover 00:03:19.407 CC app/vhost/vhost.o 00:03:19.407 CC examples/bdev/bdevperf/bdevperf.o 00:03:19.407 CC app/spdk_dd/spdk_dd.o 00:03:19.407 CC app/fio/nvme/fio_plugin.o 00:03:19.407 LINK spdk_nvme_identify 00:03:19.407 LINK vhost 00:03:19.666 CC app/fio/bdev/fio_plugin.o 00:03:19.666 LINK bdevio 00:03:19.666 LINK blobcli 00:03:19.666 CC examples/ioat/perf/perf.o 00:03:19.666 CC examples/nvme/hello_world/hello_world.o 00:03:19.924 LINK spdk_dd 00:03:19.924 LINK ioat_perf 00:03:19.924 CC examples/nvme/reconnect/reconnect.o 00:03:19.924 LINK spdk_top 00:03:19.924 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:19.924 LINK hello_world 00:03:19.924 CC examples/ioat/verify/verify.o 00:03:19.924 CC examples/nvme/arbitration/arbitration.o 00:03:19.924 LINK spdk_nvme 00:03:19.924 LINK spdk_bdev 00:03:19.924 CC examples/nvme/hotplug/hotplug.o 00:03:20.183 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:20.183 LINK bdevperf 00:03:20.183 CC examples/sock/hello_world/hello_sock.o 00:03:20.183 LINK verify 00:03:20.183 LINK reconnect 00:03:20.183 CC test/blobfs/mkfs/mkfs.o 00:03:20.183 LINK hotplug 00:03:20.183 LINK nvme_manage 00:03:20.183 LINK cmb_copy 00:03:20.183 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:20.183 LINK arbitration 00:03:20.183 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:20.441 LINK mkfs 00:03:20.441 CC examples/nvme/abort/abort.o 00:03:20.441 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:20.441 LINK hello_sock 00:03:20.441 CC examples/vmd/lsvmd/lsvmd.o 00:03:20.441 CC examples/util/zipf/zipf.o 00:03:20.441 LINK pmr_persistence 00:03:20.441 CC examples/vmd/led/led.o 00:03:20.441 CC examples/nvmf/nvmf/nvmf.o 00:03:20.441 CC examples/thread/thread/thread_ex.o 00:03:20.441 LINK lsvmd 00:03:20.699 LINK zipf 00:03:20.699 LINK led 00:03:20.699 CC examples/idxd/perf/perf.o 00:03:20.699 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:20.699 LINK abort 00:03:20.699 TEST_HEADER include/spdk/accel.h 00:03:20.699 LINK vhost_fuzz 00:03:20.699 TEST_HEADER include/spdk/accel_module.h 00:03:20.699 TEST_HEADER include/spdk/assert.h 00:03:20.699 TEST_HEADER include/spdk/barrier.h 00:03:20.699 TEST_HEADER include/spdk/base64.h 00:03:20.699 LINK thread 00:03:20.699 TEST_HEADER include/spdk/bdev.h 00:03:20.699 TEST_HEADER include/spdk/bdev_module.h 00:03:20.699 TEST_HEADER include/spdk/bdev_zone.h 00:03:20.699 TEST_HEADER include/spdk/bit_array.h 00:03:20.699 TEST_HEADER include/spdk/bit_pool.h 00:03:20.699 TEST_HEADER include/spdk/blob_bdev.h 00:03:20.699 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:20.699 TEST_HEADER include/spdk/blobfs.h 00:03:20.699 TEST_HEADER include/spdk/blob.h 00:03:20.699 TEST_HEADER include/spdk/conf.h 00:03:20.699 TEST_HEADER include/spdk/config.h 00:03:20.699 TEST_HEADER include/spdk/cpuset.h 00:03:20.699 TEST_HEADER include/spdk/crc16.h 00:03:20.699 TEST_HEADER include/spdk/crc32.h 00:03:20.699 TEST_HEADER include/spdk/crc64.h 00:03:20.699 TEST_HEADER include/spdk/dif.h 00:03:20.699 TEST_HEADER include/spdk/dma.h 00:03:20.699 TEST_HEADER include/spdk/endian.h 00:03:20.699 LINK nvmf 00:03:20.699 TEST_HEADER include/spdk/env_dpdk.h 00:03:20.699 TEST_HEADER include/spdk/env.h 00:03:20.699 LINK interrupt_tgt 00:03:20.699 
TEST_HEADER include/spdk/event.h 00:03:20.699 TEST_HEADER include/spdk/fd_group.h 00:03:20.699 TEST_HEADER include/spdk/fd.h 00:03:20.699 TEST_HEADER include/spdk/file.h 00:03:20.699 TEST_HEADER include/spdk/ftl.h 00:03:20.699 TEST_HEADER include/spdk/gpt_spec.h 00:03:20.699 TEST_HEADER include/spdk/hexlify.h 00:03:20.699 TEST_HEADER include/spdk/histogram_data.h 00:03:20.699 TEST_HEADER include/spdk/idxd.h 00:03:20.699 TEST_HEADER include/spdk/idxd_spec.h 00:03:20.699 TEST_HEADER include/spdk/init.h 00:03:20.699 TEST_HEADER include/spdk/ioat.h 00:03:20.699 TEST_HEADER include/spdk/ioat_spec.h 00:03:20.699 TEST_HEADER include/spdk/iscsi_spec.h 00:03:20.699 TEST_HEADER include/spdk/json.h 00:03:20.699 TEST_HEADER include/spdk/jsonrpc.h 00:03:20.699 TEST_HEADER include/spdk/likely.h 00:03:20.699 TEST_HEADER include/spdk/log.h 00:03:20.699 TEST_HEADER include/spdk/lvol.h 00:03:20.699 TEST_HEADER include/spdk/memory.h 00:03:20.699 TEST_HEADER include/spdk/mmio.h 00:03:20.699 TEST_HEADER include/spdk/nbd.h 00:03:20.699 TEST_HEADER include/spdk/notify.h 00:03:20.699 TEST_HEADER include/spdk/nvme.h 00:03:20.699 TEST_HEADER include/spdk/nvme_intel.h 00:03:20.699 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:20.699 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:20.699 TEST_HEADER include/spdk/nvme_spec.h 00:03:20.699 TEST_HEADER include/spdk/nvme_zns.h 00:03:20.958 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:20.958 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:20.958 TEST_HEADER include/spdk/nvmf.h 00:03:20.958 TEST_HEADER include/spdk/nvmf_spec.h 00:03:20.958 TEST_HEADER include/spdk/nvmf_transport.h 00:03:20.958 TEST_HEADER include/spdk/opal.h 00:03:20.958 TEST_HEADER include/spdk/opal_spec.h 00:03:20.958 TEST_HEADER include/spdk/pci_ids.h 00:03:20.958 TEST_HEADER include/spdk/pipe.h 00:03:20.958 TEST_HEADER include/spdk/queue.h 00:03:20.958 TEST_HEADER include/spdk/reduce.h 00:03:20.958 CC test/dma/test_dma/test_dma.o 00:03:20.958 TEST_HEADER include/spdk/rpc.h 00:03:20.958 TEST_HEADER include/spdk/scheduler.h 00:03:20.958 TEST_HEADER include/spdk/scsi.h 00:03:20.958 CC test/env/mem_callbacks/mem_callbacks.o 00:03:20.958 TEST_HEADER include/spdk/scsi_spec.h 00:03:20.958 TEST_HEADER include/spdk/sock.h 00:03:20.958 TEST_HEADER include/spdk/stdinc.h 00:03:20.958 TEST_HEADER include/spdk/string.h 00:03:20.958 TEST_HEADER include/spdk/thread.h 00:03:20.958 TEST_HEADER include/spdk/trace.h 00:03:20.958 TEST_HEADER include/spdk/trace_parser.h 00:03:20.958 TEST_HEADER include/spdk/tree.h 00:03:20.958 TEST_HEADER include/spdk/ublk.h 00:03:20.958 TEST_HEADER include/spdk/util.h 00:03:20.958 TEST_HEADER include/spdk/uuid.h 00:03:20.958 TEST_HEADER include/spdk/version.h 00:03:20.958 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:20.958 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:20.958 TEST_HEADER include/spdk/vhost.h 00:03:20.958 TEST_HEADER include/spdk/vmd.h 00:03:20.958 TEST_HEADER include/spdk/xor.h 00:03:20.958 TEST_HEADER include/spdk/zipf.h 00:03:20.958 CXX test/cpp_headers/accel.o 00:03:20.958 LINK idxd_perf 00:03:20.958 CC test/event/event_perf/event_perf.o 00:03:20.958 CC test/event/reactor/reactor.o 00:03:20.958 CC test/env/vtophys/vtophys.o 00:03:20.958 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:20.958 CXX test/cpp_headers/accel_module.o 00:03:20.958 LINK iscsi_fuzz 00:03:20.958 CC test/lvol/esnap/esnap.o 00:03:20.958 CC test/env/memory/memory_ut.o 00:03:20.958 LINK reactor 00:03:21.216 LINK vtophys 00:03:21.216 LINK event_perf 00:03:21.216 LINK env_dpdk_post_init 
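The TEST_HEADER records above enumerate the public spdk/*.h headers pulled into the test build, and the CXX test/cpp_headers records that continue below compile each header as its own C++ translation unit to prove it is self-contained. A hand-rolled spot check of a single header (header name and include path here are illustrative assumptions) might look like:

  # Sketch: confirm one public header compiles stand-alone in C and C++ mode
  printf '#include <spdk/nvme.h>\n' > /tmp/hdr_check.c
  gcc -I include -fsyntax-only /tmp/hdr_check.c           # C self-containment
  g++ -I include -fsyntax-only -x c++ /tmp/hdr_check.c    # C++ consumption, as test/cpp_headers does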
00:03:21.216 CXX test/cpp_headers/assert.o 00:03:21.216 LINK test_dma 00:03:21.216 CC test/event/reactor_perf/reactor_perf.o 00:03:21.216 CXX test/cpp_headers/barrier.o 00:03:21.216 CC test/event/app_repeat/app_repeat.o 00:03:21.216 CC test/rpc_client/rpc_client_test.o 00:03:21.216 CC test/nvme/aer/aer.o 00:03:21.216 CC test/thread/poller_perf/poller_perf.o 00:03:21.216 LINK reactor_perf 00:03:21.216 LINK mem_callbacks 00:03:21.216 CXX test/cpp_headers/base64.o 00:03:21.474 CC test/nvme/reset/reset.o 00:03:21.474 LINK app_repeat 00:03:21.474 LINK rpc_client_test 00:03:21.474 LINK poller_perf 00:03:21.474 CXX test/cpp_headers/bdev.o 00:03:21.474 CC test/nvme/sgl/sgl.o 00:03:21.474 CC test/nvme/e2edp/nvme_dp.o 00:03:21.474 LINK aer 00:03:21.474 CXX test/cpp_headers/bdev_module.o 00:03:21.474 LINK reset 00:03:21.474 CC test/event/scheduler/scheduler.o 00:03:21.731 CC test/nvme/overhead/overhead.o 00:03:21.731 CC test/nvme/err_injection/err_injection.o 00:03:21.731 CXX test/cpp_headers/bdev_zone.o 00:03:21.731 LINK memory_ut 00:03:21.731 CC test/nvme/startup/startup.o 00:03:21.731 LINK nvme_dp 00:03:21.731 CXX test/cpp_headers/bit_array.o 00:03:21.731 LINK sgl 00:03:21.731 LINK err_injection 00:03:21.731 CXX test/cpp_headers/bit_pool.o 00:03:21.731 LINK scheduler 00:03:21.731 LINK startup 00:03:21.731 CC test/env/pci/pci_ut.o 00:03:21.731 CC test/nvme/reserve/reserve.o 00:03:21.989 CC test/nvme/simple_copy/simple_copy.o 00:03:21.989 LINK overhead 00:03:21.989 CXX test/cpp_headers/blob_bdev.o 00:03:21.989 CC test/nvme/connect_stress/connect_stress.o 00:03:21.989 CC test/nvme/boot_partition/boot_partition.o 00:03:21.989 CXX test/cpp_headers/blobfs_bdev.o 00:03:21.989 CC test/nvme/compliance/nvme_compliance.o 00:03:21.989 LINK boot_partition 00:03:21.989 LINK connect_stress 00:03:21.989 CC test/nvme/fused_ordering/fused_ordering.o 00:03:21.989 CXX test/cpp_headers/blobfs.o 00:03:21.989 LINK simple_copy 00:03:21.989 CXX test/cpp_headers/blob.o 00:03:21.989 LINK reserve 00:03:22.247 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:22.247 CXX test/cpp_headers/conf.o 00:03:22.247 LINK fused_ordering 00:03:22.247 CXX test/cpp_headers/config.o 00:03:22.247 LINK pci_ut 00:03:22.247 CC test/nvme/fdp/fdp.o 00:03:22.247 CXX test/cpp_headers/cpuset.o 00:03:22.247 CC test/nvme/cuse/cuse.o 00:03:22.247 CXX test/cpp_headers/crc16.o 00:03:22.247 CXX test/cpp_headers/crc32.o 00:03:22.247 CXX test/cpp_headers/crc64.o 00:03:22.247 LINK doorbell_aers 00:03:22.247 CXX test/cpp_headers/dif.o 00:03:22.247 CXX test/cpp_headers/dma.o 00:03:22.247 CXX test/cpp_headers/endian.o 00:03:22.247 LINK nvme_compliance 00:03:22.247 CXX test/cpp_headers/env_dpdk.o 00:03:22.247 CXX test/cpp_headers/env.o 00:03:22.247 CXX test/cpp_headers/event.o 00:03:22.506 CXX test/cpp_headers/fd_group.o 00:03:22.506 CXX test/cpp_headers/fd.o 00:03:22.506 CXX test/cpp_headers/file.o 00:03:22.506 CXX test/cpp_headers/ftl.o 00:03:22.506 CXX test/cpp_headers/gpt_spec.o 00:03:22.506 LINK fdp 00:03:22.506 CXX test/cpp_headers/hexlify.o 00:03:22.506 CXX test/cpp_headers/histogram_data.o 00:03:22.506 CXX test/cpp_headers/idxd.o 00:03:22.506 CXX test/cpp_headers/idxd_spec.o 00:03:22.506 CXX test/cpp_headers/init.o 00:03:22.506 CXX test/cpp_headers/ioat.o 00:03:22.506 CXX test/cpp_headers/ioat_spec.o 00:03:22.506 CXX test/cpp_headers/iscsi_spec.o 00:03:22.506 CXX test/cpp_headers/json.o 00:03:22.506 CXX test/cpp_headers/jsonrpc.o 00:03:22.506 CXX test/cpp_headers/likely.o 00:03:22.506 CXX test/cpp_headers/log.o 00:03:22.764 CXX test/cpp_headers/lvol.o 
00:03:22.764 CXX test/cpp_headers/memory.o 00:03:22.764 CXX test/cpp_headers/mmio.o 00:03:22.764 CXX test/cpp_headers/nbd.o 00:03:22.764 CXX test/cpp_headers/notify.o 00:03:22.764 CXX test/cpp_headers/nvme.o 00:03:22.764 CXX test/cpp_headers/nvme_intel.o 00:03:22.764 CXX test/cpp_headers/nvme_ocssd.o 00:03:22.764 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:22.764 CXX test/cpp_headers/nvme_spec.o 00:03:22.764 CXX test/cpp_headers/nvme_zns.o 00:03:22.764 CXX test/cpp_headers/nvmf_cmd.o 00:03:22.764 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:22.764 CXX test/cpp_headers/nvmf.o 00:03:22.764 CXX test/cpp_headers/nvmf_spec.o 00:03:22.764 CXX test/cpp_headers/nvmf_transport.o 00:03:22.764 CXX test/cpp_headers/opal.o 00:03:23.022 CXX test/cpp_headers/opal_spec.o 00:03:23.022 CXX test/cpp_headers/pci_ids.o 00:03:23.022 CXX test/cpp_headers/pipe.o 00:03:23.022 CXX test/cpp_headers/queue.o 00:03:23.022 CXX test/cpp_headers/reduce.o 00:03:23.022 CXX test/cpp_headers/rpc.o 00:03:23.022 CXX test/cpp_headers/scheduler.o 00:03:23.022 CXX test/cpp_headers/scsi.o 00:03:23.022 CXX test/cpp_headers/scsi_spec.o 00:03:23.022 CXX test/cpp_headers/sock.o 00:03:23.022 CXX test/cpp_headers/stdinc.o 00:03:23.022 CXX test/cpp_headers/string.o 00:03:23.022 CXX test/cpp_headers/thread.o 00:03:23.022 LINK cuse 00:03:23.022 CXX test/cpp_headers/trace.o 00:03:23.022 CXX test/cpp_headers/trace_parser.o 00:03:23.022 CXX test/cpp_headers/tree.o 00:03:23.282 CXX test/cpp_headers/ublk.o 00:03:23.282 CXX test/cpp_headers/util.o 00:03:23.282 CXX test/cpp_headers/uuid.o 00:03:23.282 CXX test/cpp_headers/version.o 00:03:23.282 CXX test/cpp_headers/vfio_user_pci.o 00:03:23.282 CXX test/cpp_headers/vfio_user_spec.o 00:03:23.282 CXX test/cpp_headers/vhost.o 00:03:23.282 CXX test/cpp_headers/vmd.o 00:03:23.282 CXX test/cpp_headers/xor.o 00:03:23.282 CXX test/cpp_headers/zipf.o 00:03:25.180 LINK esnap 00:03:25.441 00:03:25.442 real 0m48.547s 00:03:25.442 user 4m45.931s 00:03:25.442 sys 1m1.213s 00:03:25.442 15:45:36 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:25.442 ************************************ 00:03:25.442 END TEST make 00:03:25.442 ************************************ 00:03:25.442 15:45:36 -- common/autotest_common.sh@10 -- $ set +x 00:03:25.703 15:45:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:25.703 15:45:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:25.703 15:45:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:25.703 15:45:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:25.703 15:45:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:25.703 15:45:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:25.703 15:45:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:25.703 15:45:36 -- scripts/common.sh@335 -- # IFS=.-: 00:03:25.703 15:45:36 -- scripts/common.sh@335 -- # read -ra ver1 00:03:25.703 15:45:36 -- scripts/common.sh@336 -- # IFS=.-: 00:03:25.703 15:45:36 -- scripts/common.sh@336 -- # read -ra ver2 00:03:25.703 15:45:36 -- scripts/common.sh@337 -- # local 'op=<' 00:03:25.703 15:45:36 -- scripts/common.sh@339 -- # ver1_l=2 00:03:25.703 15:45:36 -- scripts/common.sh@340 -- # ver2_l=1 00:03:25.703 15:45:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:25.703 15:45:36 -- scripts/common.sh@343 -- # case "$op" in 00:03:25.703 15:45:36 -- scripts/common.sh@344 -- # : 1 00:03:25.703 15:45:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:25.703 15:45:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:25.703 15:45:36 -- scripts/common.sh@364 -- # decimal 1 00:03:25.703 15:45:36 -- scripts/common.sh@352 -- # local d=1 00:03:25.703 15:45:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:25.703 15:45:36 -- scripts/common.sh@354 -- # echo 1 00:03:25.703 15:45:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:25.703 15:45:36 -- scripts/common.sh@365 -- # decimal 2 00:03:25.703 15:45:36 -- scripts/common.sh@352 -- # local d=2 00:03:25.703 15:45:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:25.703 15:45:36 -- scripts/common.sh@354 -- # echo 2 00:03:25.703 15:45:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:25.703 15:45:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:25.703 15:45:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:25.703 15:45:36 -- scripts/common.sh@367 -- # return 0 00:03:25.703 15:45:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:25.703 15:45:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:25.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.703 --rc genhtml_branch_coverage=1 00:03:25.703 --rc genhtml_function_coverage=1 00:03:25.703 --rc genhtml_legend=1 00:03:25.703 --rc geninfo_all_blocks=1 00:03:25.703 --rc geninfo_unexecuted_blocks=1 00:03:25.703 00:03:25.703 ' 00:03:25.703 15:45:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:25.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.703 --rc genhtml_branch_coverage=1 00:03:25.703 --rc genhtml_function_coverage=1 00:03:25.703 --rc genhtml_legend=1 00:03:25.703 --rc geninfo_all_blocks=1 00:03:25.703 --rc geninfo_unexecuted_blocks=1 00:03:25.703 00:03:25.703 ' 00:03:25.703 15:45:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:25.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.703 --rc genhtml_branch_coverage=1 00:03:25.703 --rc genhtml_function_coverage=1 00:03:25.703 --rc genhtml_legend=1 00:03:25.703 --rc geninfo_all_blocks=1 00:03:25.703 --rc geninfo_unexecuted_blocks=1 00:03:25.703 00:03:25.703 ' 00:03:25.703 15:45:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:25.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.703 --rc genhtml_branch_coverage=1 00:03:25.703 --rc genhtml_function_coverage=1 00:03:25.703 --rc genhtml_legend=1 00:03:25.703 --rc geninfo_all_blocks=1 00:03:25.703 --rc geninfo_unexecuted_blocks=1 00:03:25.703 00:03:25.703 ' 00:03:25.703 15:45:36 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:25.703 15:45:36 -- nvmf/common.sh@7 -- # uname -s 00:03:25.703 15:45:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:25.703 15:45:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:25.703 15:45:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:25.703 15:45:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:25.703 15:45:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:25.703 15:45:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:25.703 15:45:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:25.703 15:45:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:25.703 15:45:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:25.703 15:45:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:25.703 15:45:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4930376-339e-47d0-8578-dd6c8fd2c062 00:03:25.703 
15:45:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=a4930376-339e-47d0-8578-dd6c8fd2c062 00:03:25.703 15:45:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:25.703 15:45:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:25.703 15:45:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:25.703 15:45:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:25.703 15:45:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:25.703 15:45:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:25.703 15:45:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:25.703 15:45:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.703 15:45:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.703 15:45:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.703 15:45:36 -- paths/export.sh@5 -- # export PATH 00:03:25.703 15:45:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.703 15:45:36 -- nvmf/common.sh@46 -- # : 0 00:03:25.703 15:45:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:25.703 15:45:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:25.703 15:45:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:25.703 15:45:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:25.703 15:45:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:25.703 15:45:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:25.703 15:45:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:25.703 15:45:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:25.703 15:45:36 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:25.703 15:45:36 -- spdk/autotest.sh@32 -- # uname -s 00:03:25.703 15:45:36 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:25.703 15:45:36 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:25.703 15:45:36 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:25.703 15:45:36 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:25.703 15:45:36 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:25.703 15:45:36 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:25.703 15:45:37 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:25.703 15:45:37 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:25.703 15:45:37 -- spdk/autotest.sh@48 
-- # udevadm_pid=48140 00:03:25.703 15:45:37 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:25.703 15:45:37 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:25.703 15:45:37 -- spdk/autotest.sh@54 -- # echo 48159 00:03:25.703 15:45:37 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:25.703 15:45:37 -- spdk/autotest.sh@56 -- # echo 48164 00:03:25.703 15:45:37 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:25.703 15:45:37 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:25.703 15:45:37 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:25.703 15:45:37 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:25.703 15:45:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:25.703 15:45:37 -- common/autotest_common.sh@10 -- # set +x 00:03:25.703 15:45:37 -- spdk/autotest.sh@70 -- # create_test_list 00:03:25.703 15:45:37 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:25.703 15:45:37 -- common/autotest_common.sh@10 -- # set +x 00:03:25.703 15:45:37 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:25.703 15:45:37 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:25.703 15:45:37 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:25.703 15:45:37 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:25.703 15:45:37 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:25.703 15:45:37 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:25.703 15:45:37 -- common/autotest_common.sh@1450 -- # uname 00:03:25.703 15:45:37 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:03:25.703 15:45:37 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:25.703 15:45:37 -- common/autotest_common.sh@1470 -- # uname 00:03:25.703 15:45:37 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:03:25.703 15:45:37 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:03:25.703 15:45:37 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:25.703 lcov: LCOV version 1.15 00:03:25.965 15:45:37 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:34.106 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:34.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:34.106 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:34.106 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:34.106 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:34.107 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:56.172 15:46:04 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:03:56.172 15:46:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:56.172 15:46:04 -- common/autotest_common.sh@10 -- # set +x 00:03:56.172 15:46:04 -- spdk/autotest.sh@89 -- # rm -f 00:03:56.172 15:46:04 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:56.172 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:56.172 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:03:56.172 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:03:56.172 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:03:56.172 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:03:56.172 15:46:05 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:03:56.172 15:46:05 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:56.172 15:46:05 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:56.172 15:46:05 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:56.172 15:46:05 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:56.172 15:46:05 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:56.172 15:46:05 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:56.172 15:46:05 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:56.172 15:46:05 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:56.172 15:46:05 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:56.172 15:46:05 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:56.173 15:46:05 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:56.173 15:46:05 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:56.173 15:46:05 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:56.173 15:46:05 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:56.173 15:46:05 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:03:56.173 15:46:05 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:03:56.173 15:46:05 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:56.173 15:46:05 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:56.173 15:46:05 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:56.173 15:46:05 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n2 00:03:56.173 15:46:05 -- common/autotest_common.sh@1657 -- # local device=nvme2n2 00:03:56.173 15:46:05 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:56.173 15:46:05 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:56.173 15:46:05 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:56.173 15:46:05 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n3 00:03:56.173 15:46:05 -- common/autotest_common.sh@1657 -- # local device=nvme2n3 00:03:56.173 15:46:05 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:56.173 15:46:05 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:56.173 15:46:05 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:56.173 15:46:05 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3c3n1 00:03:56.173 15:46:05 -- 
common/autotest_common.sh@1657 -- # local device=nvme3c3n1 00:03:56.173 15:46:05 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:56.173 15:46:05 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:56.173 15:46:05 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:56.173 15:46:05 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:03:56.173 15:46:05 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:03:56.173 15:46:05 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:56.173 15:46:05 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:56.173 15:46:05 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:03:56.173 15:46:05 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme2n2 /dev/nvme2n3 /dev/nvme3n1 00:03:56.173 15:46:05 -- spdk/autotest.sh@108 -- # grep -v p 00:03:56.173 15:46:05 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:56.173 15:46:05 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:56.173 15:46:05 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:03:56.173 15:46:05 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:56.173 15:46:05 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:56.173 No valid GPT data, bailing 00:03:56.173 15:46:05 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:56.173 15:46:05 -- scripts/common.sh@393 -- # pt= 00:03:56.173 15:46:05 -- scripts/common.sh@394 -- # return 1 00:03:56.173 15:46:05 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:56.173 1+0 records in 00:03:56.173 1+0 records out 00:03:56.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295195 s, 35.5 MB/s 00:03:56.173 15:46:05 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:56.173 15:46:05 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:56.173 15:46:05 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:03:56.173 15:46:05 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:03:56.173 15:46:05 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:56.173 No valid GPT data, bailing 00:03:56.173 15:46:05 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:56.173 15:46:05 -- scripts/common.sh@393 -- # pt= 00:03:56.173 15:46:05 -- scripts/common.sh@394 -- # return 1 00:03:56.173 15:46:05 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:56.173 1+0 records in 00:03:56.173 1+0 records out 00:03:56.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00673661 s, 156 MB/s 00:03:56.173 15:46:05 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:56.173 15:46:05 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:56.173 15:46:05 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n1 00:03:56.173 15:46:05 -- scripts/common.sh@380 -- # local block=/dev/nvme2n1 pt 00:03:56.173 15:46:05 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:56.173 No valid GPT data, bailing 00:03:56.173 15:46:05 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:56.173 15:46:05 -- scripts/common.sh@393 -- # pt= 00:03:56.173 15:46:05 -- scripts/common.sh@394 -- # return 1 00:03:56.173 15:46:05 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:56.173 1+0 
records in 00:03:56.173 1+0 records out 00:03:56.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00705829 s, 149 MB/s 00:03:56.173 15:46:05 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:56.173 15:46:05 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:56.173 15:46:05 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n2 00:03:56.173 15:46:05 -- scripts/common.sh@380 -- # local block=/dev/nvme2n2 pt 00:03:56.173 15:46:05 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:56.173 No valid GPT data, bailing 00:03:56.173 15:46:05 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:56.173 15:46:05 -- scripts/common.sh@393 -- # pt= 00:03:56.173 15:46:05 -- scripts/common.sh@394 -- # return 1 00:03:56.173 15:46:05 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:56.173 1+0 records in 00:03:56.173 1+0 records out 00:03:56.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00677245 s, 155 MB/s 00:03:56.173 15:46:05 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:56.173 15:46:05 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:56.173 15:46:05 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n3 00:03:56.173 15:46:05 -- scripts/common.sh@380 -- # local block=/dev/nvme2n3 pt 00:03:56.173 15:46:05 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:56.173 No valid GPT data, bailing 00:03:56.173 15:46:05 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:56.173 15:46:05 -- scripts/common.sh@393 -- # pt= 00:03:56.173 15:46:05 -- scripts/common.sh@394 -- # return 1 00:03:56.173 15:46:05 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:03:56.173 1+0 records in 00:03:56.173 1+0 records out 00:03:56.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00565852 s, 185 MB/s 00:03:56.173 15:46:05 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:56.173 15:46:05 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:56.173 15:46:05 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme3n1 00:03:56.173 15:46:05 -- scripts/common.sh@380 -- # local block=/dev/nvme3n1 pt 00:03:56.173 15:46:05 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:56.173 No valid GPT data, bailing 00:03:56.173 15:46:05 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:56.173 15:46:05 -- scripts/common.sh@393 -- # pt= 00:03:56.173 15:46:05 -- scripts/common.sh@394 -- # return 1 00:03:56.173 15:46:05 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:56.173 1+0 records in 00:03:56.173 1+0 records out 00:03:56.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00684799 s, 153 MB/s 00:03:56.173 15:46:05 -- spdk/autotest.sh@116 -- # sync 00:03:56.173 15:46:05 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:56.173 15:46:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:56.173 15:46:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:56.436 15:46:07 -- spdk/autotest.sh@122 -- # uname -s 00:03:56.436 15:46:07 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:03:56.436 15:46:07 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:56.436 15:46:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:56.436 15:46:07 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:03:56.436 15:46:07 -- common/autotest_common.sh@10 -- # set +x 00:03:56.436 ************************************ 00:03:56.436 START TEST setup.sh 00:03:56.436 ************************************ 00:03:56.436 15:46:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:56.436 * Looking for test storage... 00:03:56.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:56.436 15:46:07 -- setup/test-setup.sh@10 -- # uname -s 00:03:56.436 15:46:07 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:56.436 15:46:07 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:56.436 15:46:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:56.436 15:46:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:56.436 15:46:07 -- common/autotest_common.sh@10 -- # set +x 00:03:56.436 ************************************ 00:03:56.436 START TEST acl 00:03:56.436 ************************************ 00:03:56.436 15:46:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:56.699 * Looking for test storage...
00:03:56.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:56.700 15:46:07 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:56.700 15:46:07 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:56.700 15:46:07 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:56.700 15:46:07 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:56.700 15:46:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:56.700 15:46:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:56.700 15:46:07 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:56.700 15:46:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:56.700 15:46:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:56.700 15:46:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:56.700 15:46:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:56.700 15:46:07 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:56.700 15:46:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:56.700 15:46:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:56.700 15:46:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:56.700 15:46:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:03:56.700 15:46:07 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:03:56.700 15:46:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:56.700 15:46:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:56.700 15:46:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:56.700 15:46:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n2 00:03:56.700 15:46:07 -- common/autotest_common.sh@1657 -- # local device=nvme2n2 00:03:56.700 15:46:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:56.700 15:46:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:56.700 15:46:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:56.700 15:46:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n3 00:03:56.700 15:46:07 -- common/autotest_common.sh@1657 -- # local device=nvme2n3 00:03:56.700 15:46:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:56.700 15:46:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:56.700 15:46:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:56.700 15:46:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3c3n1 00:03:56.700 15:46:07 -- common/autotest_common.sh@1657 -- # local device=nvme3c3n1 00:03:56.700 15:46:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:56.700 15:46:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:56.700 15:46:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:56.700 15:46:08 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:03:56.700
15:46:08 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:56.700 15:46:08 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:56.700 15:46:08 -- setup/acl.sh@12 -- # devs=() 00:03:56.700 15:46:08 -- setup/acl.sh@12 -- # declare -a devs 00:03:56.700 15:46:08 -- setup/acl.sh@13 -- # drivers=() 00:03:56.700 15:46:08 -- setup/acl.sh@13 -- # declare -A drivers 00:03:56.700 15:46:08 -- setup/acl.sh@51 -- # setup reset 00:03:56.700 15:46:08 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:56.700 15:46:08 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:58.088 15:46:09 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:58.088 15:46:09 -- setup/acl.sh@16 -- # local dev driver 00:03:58.088 15:46:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.088 15:46:09 -- setup/acl.sh@15 -- # setup output status 00:03:58.088 15:46:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.088 15:46:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:58.088 Hugepages 00:03:58.088 node hugesize free / total 00:03:58.088 15:46:09 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:58.088 15:46:09 -- setup/acl.sh@19 -- # continue 00:03:58.088 15:46:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.088 00:03:58.088 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:58.088 15:46:09 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:58.088 15:46:09 -- setup/acl.sh@19 -- # continue 00:03:58.088 15:46:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.088 15:46:09 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:58.088 15:46:09 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:58.089 15:46:09 -- setup/acl.sh@20 -- # continue 00:03:58.089 15:46:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.089 15:46:09 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:03:58.089 15:46:09 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:58.089 15:46:09 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:58.089 15:46:09 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:58.089 15:46:09 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:58.089 15:46:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.089 15:46:09 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:03:58.089 15:46:09 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:58.089 15:46:09 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:58.089 15:46:09 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:58.089 15:46:09 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:58.089 15:46:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.351 15:46:09 -- setup/acl.sh@19 -- # [[ 0000:00:08.0 == *:*:*.* ]] 00:03:58.351 15:46:09 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:58.351 15:46:09 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:58.351 15:46:09 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:58.351 15:46:09 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:58.351 15:46:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.351 15:46:09 -- setup/acl.sh@19 -- # [[ 0000:00:09.0 == *:*:*.* ]] 00:03:58.351 15:46:09 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:58.351 15:46:09 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:03:58.351 15:46:09 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:58.351 15:46:09 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 
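The status dump above comes from scripts/setup.sh: hugepage totals per NUMA node and size, then the PCI table the acl test filters, skipping the virtio-pci boot disk at 0000:00:03.0 and collecting the four QEMU NVMe controllers (1b36 0010) at 0000:00:06.0 through 0000:00:09.0 together with their bound driver. The same state can be inspected by hand; a sketch, reusing the BDF and IDs from the table above:

  # Sketch: inspect the state setup.sh reports
  grep -i huge /proc/meminfo                                           # hugepage totals per size
  lspci -nn -d 1b36:0010                                               # the QEMU NVMe controllers in the table
  basename "$(readlink -f /sys/bus/pci/devices/0000:00:06.0/driver)"   # driver currently bound to one of them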
00:03:58.351 15:46:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.351 15:46:09 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:03:58.351 15:46:09 -- setup/acl.sh@54 -- # run_test denied denied 00:03:58.351 15:46:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:58.351 15:46:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:58.351 15:46:09 -- common/autotest_common.sh@10 -- # set +x 00:03:58.351 ************************************ 00:03:58.351 START TEST denied 00:03:58.351 ************************************ 00:03:58.351 15:46:09 -- common/autotest_common.sh@1114 -- # denied 00:03:58.351 15:46:09 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:03:58.351 15:46:09 -- setup/acl.sh@38 -- # setup output config 00:03:58.351 15:46:09 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:03:58.351 15:46:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.352 15:46:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:59.739 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:03:59.739 15:46:10 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:03:59.739 15:46:10 -- setup/acl.sh@28 -- # local dev driver 00:03:59.739 15:46:10 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:59.739 15:46:10 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:03:59.739 15:46:10 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:03:59.739 15:46:10 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:59.739 15:46:10 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:59.739 15:46:10 -- setup/acl.sh@41 -- # setup reset 00:03:59.739 15:46:10 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:59.739 15:46:10 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:06.371 00:04:06.371 real 0m7.193s 00:04:06.371 user 0m0.755s 00:04:06.371 sys 0m1.281s 00:04:06.371 ************************************ 00:04:06.371 END TEST denied 00:04:06.371 ************************************ 00:04:06.371 15:46:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:06.371 15:46:16 -- common/autotest_common.sh@10 -- # set +x 00:04:06.371 15:46:16 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:06.371 15:46:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:06.371 15:46:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:06.371 15:46:16 -- common/autotest_common.sh@10 -- # set +x 00:04:06.371 ************************************ 00:04:06.371 START TEST allowed 00:04:06.371 ************************************ 00:04:06.371 15:46:16 -- common/autotest_common.sh@1114 -- # allowed 00:04:06.371 15:46:16 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:06.371 15:46:16 -- setup/acl.sh@45 -- # setup output config 00:04:06.371 15:46:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.371 15:46:16 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:06.371 15:46:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:06.629 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:06.629 15:46:17 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:06.629 15:46:17 -- setup/acl.sh@28 -- # local dev driver 00:04:06.629 15:46:17 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:06.629 15:46:17 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:06.629 15:46:17 -- setup/acl.sh@32 -- # readlink -f 
/sys/bus/pci/devices/0000:00:07.0/driver 00:04:06.629 15:46:17 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:06.629 15:46:17 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:06.629 15:46:17 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:06.629 15:46:17 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:08.0 ]] 00:04:06.629 15:46:17 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:08.0/driver 00:04:06.629 15:46:17 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:06.629 15:46:17 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:06.629 15:46:17 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:06.629 15:46:17 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:09.0 ]] 00:04:06.629 15:46:17 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:09.0/driver 00:04:06.629 15:46:17 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:06.629 15:46:17 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:06.629 15:46:17 -- setup/acl.sh@48 -- # setup reset 00:04:06.629 15:46:17 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:06.629 15:46:17 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:07.564 ************************************ 00:04:07.564 END TEST allowed 00:04:07.564 ************************************ 00:04:07.564 00:04:07.564 real 0m1.927s 00:04:07.564 user 0m0.843s 00:04:07.564 sys 0m0.951s 00:04:07.564 15:46:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:07.564 15:46:18 -- common/autotest_common.sh@10 -- # set +x 00:04:07.564 ************************************ 00:04:07.564 END TEST acl 00:04:07.564 ************************************ 00:04:07.564 00:04:07.564 real 0m11.026s 00:04:07.564 user 0m2.302s 00:04:07.564 sys 0m3.292s 00:04:07.564 15:46:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:07.564 15:46:18 -- common/autotest_common.sh@10 -- # set +x 00:04:07.564 15:46:18 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:07.564 15:46:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:07.564 15:46:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:07.564 15:46:18 -- common/autotest_common.sh@10 -- # set +x 00:04:07.564 ************************************ 00:04:07.564 START TEST hugepages 00:04:07.564 ************************************ 00:04:07.564 15:46:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:07.564 * Looking for test storage... 
00:04:07.564 15:46:18 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:07.564 15:46:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:07.564 15:46:18 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:07.564 15:46:18 -- common/autotest_common.sh@10 -- # set +x
00:04:07.564 ************************************
00:04:07.564 START TEST hugepages
00:04:07.564 ************************************
00:04:07.564 15:46:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:07.564 * Looking for test storage...
00:04:07.564 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:07.564 15:46:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:07.564 15:46:18 -- common/autotest_common.sh@1690 -- # lcov --version
00:04:07.564 15:46:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:07.823 15:46:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:07.823 15:46:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:07.823 15:46:19 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:07.823 15:46:19 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:07.823 15:46:19 -- scripts/common.sh@335 -- # IFS=.-:
00:04:07.823 15:46:19 -- scripts/common.sh@335 -- # read -ra ver1
00:04:07.823 15:46:19 -- scripts/common.sh@336 -- # IFS=.-:
00:04:07.823 15:46:19 -- scripts/common.sh@336 -- # read -ra ver2
00:04:07.823 15:46:19 -- scripts/common.sh@337 -- # local 'op=<'
00:04:07.823 15:46:19 -- scripts/common.sh@339 -- # ver1_l=2
00:04:07.823 15:46:19 -- scripts/common.sh@340 -- # ver2_l=1
00:04:07.823 15:46:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:07.823 15:46:19 -- scripts/common.sh@343 -- # case "$op" in
00:04:07.823 15:46:19 -- scripts/common.sh@344 -- # : 1
00:04:07.823 15:46:19 -- scripts/common.sh@363 -- # (( v = 0 ))
00:04:07.823 15:46:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:07.823 15:46:19 -- scripts/common.sh@364 -- # decimal 1
00:04:07.823 15:46:19 -- scripts/common.sh@352 -- # local d=1
00:04:07.823 15:46:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:07.823 15:46:19 -- scripts/common.sh@354 -- # echo 1
00:04:07.823 15:46:19 -- scripts/common.sh@364 -- # ver1[v]=1
00:04:07.823 15:46:19 -- scripts/common.sh@365 -- # decimal 2
00:04:07.823 15:46:19 -- scripts/common.sh@352 -- # local d=2
00:04:07.823 15:46:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:07.823 15:46:19 -- scripts/common.sh@354 -- # echo 2
00:04:07.823 15:46:19 -- scripts/common.sh@365 -- # ver2[v]=2
00:04:07.823 15:46:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:07.823 15:46:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:07.823 15:46:19 -- scripts/common.sh@367 -- # return 0
00:04:07.823 15:46:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:07.823 15:46:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:07.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:07.823 --rc genhtml_branch_coverage=1
00:04:07.823 --rc genhtml_function_coverage=1
00:04:07.823 --rc genhtml_legend=1
00:04:07.823 --rc geninfo_all_blocks=1
00:04:07.823 --rc geninfo_unexecuted_blocks=1
00:04:07.823 '
00:04:07.823 15:46:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' [same --rc option list as above] '
00:04:07.823 15:46:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov [same --rc option list as above] '
00:04:07.823 15:46:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov [same --rc option list as above] '
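
The `lt 1.15 2` check above succeeds because cmp_versions splits both version strings on `.`, `-`, or `:` (note the `IFS=.-:` in the trace) and compares the components numerically, left to right. A self-contained sketch of the same idea; ver_lt is our name for it, and the real helper additionally validates each component through decimal():

    #!/usr/bin/env bash
    # Component-wise "is version A < version B?", in the spirit of the
    # cmp_versions trace above.
    ver_lt() {
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local v n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1  # first larger component: not less
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0  # first smaller component: less
        done
        return 1                                       # equal throughout: not less
    }

    ver_lt 1.15 2 && echo "1.15 < 2"   # same verdict the trace reaches via return 0

Missing components default to 0, so 1.15 compares against 2 as (1,15) vs (2,0) and the first component already decides the result.
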
00:04:07.823 15:46:19 -- setup/hugepages.sh@10 -- # nodes_sys=()
00:04:07.823 15:46:19 -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:04:07.823 15:46:19 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:04:07.823 15:46:19 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:04:07.823 15:46:19 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:04:07.823 15:46:19 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:04:07.823 15:46:19 -- setup/common.sh@17 -- # local get=Hugepagesize
00:04:07.823 15:46:19 -- setup/common.sh@18 -- # local node=
00:04:07.823 15:46:19 -- setup/common.sh@19 -- # local var val
00:04:07.823 15:46:19 -- setup/common.sh@20 -- # local mem_f mem
00:04:07.823 15:46:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.823 15:46:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:07.823 15:46:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:07.823 15:46:19 -- setup/common.sh@28 -- # mapfile -t mem
00:04:07.823 15:46:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.823 15:46:19 -- setup/common.sh@31 -- # IFS=': '
00:04:07.823 15:46:19 -- setup/common.sh@31 -- # read -r var val _
00:04:07.823 15:46:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 5818692 kB' 'MemAvailable: 7374576 kB' 'Buffers: 2684 kB' 'Cached: 1768956 kB' 'SwapCached: 0 kB' 'Active: 465136 kB' 'Inactive: 1421884 kB' 'Active(anon): 125912 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421884 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 117068 kB' 'Mapped: 50732 kB' 'Shmem: 10532 kB' 'KReclaimable: 63852 kB' 'Slab: 161764 kB' 'SReclaimable: 63852 kB' 'SUnreclaim: 97912 kB' 'KernelStack: 6464 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12410000 kB' 'Committed_AS: 319084 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@31/@32 xtrace condensed: each field from MemTotal through HugePages_Surp is tested against Hugepagesize and skipped with continue ...]
00:04:07.825 15:46:19 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.825 15:46:19 -- setup/common.sh@33 -- # echo 2048
00:04:07.825 15:46:19 -- setup/common.sh@33 -- # return 0
00:04:07.825 15:46:19 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:07.825 15:46:19 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:07.825 15:46:19 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
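
get_meminfo, traced above, reads /proc/meminfo line by line (via mapfile) and prints the value of the first field whose name matches; when a node is given it reads the per-node copy instead, whose lines carry a leading "Node N " prefix that the trace strips. A sketch of that pattern under those assumptions, simplified from the traced mapfile logic:

    #!/usr/bin/env bash
    shopt -s extglob    # for the +([0-9]) pattern, as in the trace

    get_meminfo() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }              # strip the per-node prefix
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }

    get_meminfo Hugepagesize    # prints 2048 on this VM, matching the trace

Here it returns 2048 (kB), so the rest of the suite sizes its pools in 2 MiB pages.
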
00:04:07.825 15:46:19 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:07.825 15:46:19 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:07.825 15:46:19 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:07.825 15:46:19 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:07.825 15:46:19 -- setup/hugepages.sh@207 -- # get_nodes
00:04:07.825 15:46:19 -- setup/hugepages.sh@27 -- # local node
00:04:07.825 15:46:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:07.825 15:46:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:07.825 15:46:19 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:07.825 15:46:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:07.825 15:46:19 -- setup/hugepages.sh@208 -- # clear_hp
00:04:07.825 15:46:19 -- setup/hugepages.sh@37 -- # local node hp
00:04:07.825 15:46:19 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:07.825 15:46:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:07.825 15:46:19 -- setup/hugepages.sh@41 -- # echo 0
00:04:07.825 15:46:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:07.825 15:46:19 -- setup/hugepages.sh@41 -- # echo 0
00:04:07.825 15:46:19 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:07.825 15:46:19 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:07.825 15:46:19 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:07.825 15:46:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:07.825 15:46:19 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:07.825 15:46:19 -- common/autotest_common.sh@10 -- # set +x
00:04:07.825 ************************************
00:04:07.825 START TEST default_setup
00:04:07.825 ************************************
00:04:07.825 15:46:19 -- common/autotest_common.sh@1114 -- # default_setup
00:04:07.825 15:46:19 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:07.825 15:46:19 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:07.825 15:46:19 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:07.825 15:46:19 -- setup/hugepages.sh@51 -- # shift
00:04:07.825 15:46:19 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:07.825 15:46:19 -- setup/hugepages.sh@52 -- # local node_ids
00:04:07.825 15:46:19 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:07.825 15:46:19 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:07.825 15:46:19 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:07.825 15:46:19 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:07.825 15:46:19 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:07.825 15:46:19 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:07.825 15:46:19 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:07.825 15:46:19 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:07.825 15:46:19 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:07.825 15:46:19 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:07.825 15:46:19 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:07.825 15:46:19 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:07.825 15:46:19 -- setup/hugepages.sh@73 -- # return 0
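
default_setup asks for 2097152 kB of hugepages on node 0; with the 2048 kB page size just detected, get_test_nr_hugepages lands on 1024 pages. The arithmetic, plus the clear_hp-style reset of the per-node pools seen above (sysfs paths as in the trace, root required; size_kb is our name):

    # A size in kB divided by the hugepage size in kB gives the page count.
    default_hugepages=2048                    # kB, from get_meminfo Hugepagesize
    size_kb=2097152                           # the 2 GiB request in the trace
    echo $(( size_kb / default_hugepages ))   # -> 1024, as in the trace

    # clear_hp-style reset: zero every per-node hugepage pool before resizing.
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"
    done
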
00:04:07.825 15:46:19 -- setup/hugepages.sh@137 -- # setup output
00:04:07.825 15:46:19 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:07.825 15:46:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:08.761 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:08.761 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:04:08.761 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:04:08.761 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:08.761 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:04:08.761 15:46:20 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:08.761 15:46:20 -- setup/hugepages.sh@89 -- # local node
00:04:08.761 15:46:20 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:08.761 15:46:20 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:08.761 15:46:20 -- setup/hugepages.sh@92 -- # local surp
00:04:08.761 15:46:20 -- setup/hugepages.sh@93 -- # local resv
00:04:08.761 15:46:20 -- setup/hugepages.sh@94 -- # local anon
00:04:08.761 15:46:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:08.761 15:46:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:08.761 15:46:20 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:08.761 15:46:20 -- setup/common.sh@18 -- # local node=
00:04:08.761 15:46:20 -- setup/common.sh@19 -- # local var val
00:04:08.761 15:46:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:08.761 15:46:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.761 15:46:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.761 15:46:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.761 15:46:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.761 15:46:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.761 15:46:20 -- setup/common.sh@31 -- # IFS=': '
00:04:08.761 15:46:20 -- setup/common.sh@31 -- # read -r var val _
00:04:08.761 15:46:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7941384 kB' 'MemAvailable: 9497044 kB' 'Buffers: 2684 kB' 'Cached: 1768944 kB' 'SwapCached: 0 kB' 'Active: 467584 kB' 'Inactive: 1421904 kB' 'Active(anon): 128360 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421904 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 119452 kB' 'Mapped: 50816 kB' 'Shmem: 10492 kB' 'KReclaimable: 63368 kB' 'Slab: 161528 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98160 kB' 'KernelStack: 6496 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 325404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@31/@32 xtrace condensed: each field from MemTotal through HardwareCorrupted is tested against AnonHugePages and skipped with continue ...]
00:04:08.762 15:46:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.762 15:46:20 -- setup/common.sh@33 -- # echo 0
00:04:08.762 15:46:20 -- setup/common.sh@33 -- # return 0
00:04:08.762 15:46:20 -- setup/hugepages.sh@97 -- # anon=0
00:04:08.762 15:46:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:08.762 15:46:20 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:08.762 15:46:20 -- setup/common.sh@18 -- # local node=
00:04:08.762 15:46:20 -- setup/common.sh@19 -- # local var val
00:04:08.762 15:46:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:08.762 15:46:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.762 15:46:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.762 15:46:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.762 15:46:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.762 15:46:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.762 15:46:20 -- setup/common.sh@31 -- # IFS=': '
00:04:08.762 15:46:20 -- setup/common.sh@31 -- # read -r var val _
00:04:08.763 15:46:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7941384 kB' 'MemAvailable: 9497044 kB' 'Buffers: 2684 kB' 'Cached: 1768944 kB' 'SwapCached: 0 kB' 'Active: 467360 kB' 'Inactive: 1421904 kB' 'Active(anon): 128136 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421904 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 119228 kB' 'Mapped: 50816 kB' 'Shmem: 10492 kB' 'KReclaimable: 63368 kB' 'Slab: 161528 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98160 kB' 'KernelStack: 6480 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 325404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@31/@32 xtrace condensed: each field from MemTotal through HugePages_Rsvd is tested against HugePages_Surp and skipped with continue ...]
00:04:08.764 15:46:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.764 15:46:20 -- setup/common.sh@33 -- # echo 0
00:04:08.764 15:46:20 -- setup/common.sh@33 -- # return 0
00:04:08.764 15:46:20 -- setup/hugepages.sh@99 -- # surp=0
00:04:08.764 15:46:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:08.764 15:46:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:08.764 15:46:20 -- setup/common.sh@18 -- # local node=
00:04:08.764 15:46:20 -- setup/common.sh@19 -- # local var val
00:04:08.764 15:46:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:08.764 15:46:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.764 15:46:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.764 15:46:20 -- setup/common.sh@25 -- # [[ -n '' ]]
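
verify_nr_hugepages gathers AnonHugePages, HugePages_Surp and HugePages_Rsvd through get_meminfo before judging the pool, and the snapshots above already show HugePages_Total/Free at the requested 1024. A simplified sketch of that bookkeeping, reusing the get_meminfo sketch from earlier; the specific checks are our reading of the trace, not SPDK's exact assertions:

    # Check that the hugepage pool the test asked for actually materialized.
    verify_nr_hugepages() {
        local expected=$1 anon surp resv total free
        anon=$(get_meminfo AnonHugePages)    # transparent hugepages in use, kB
        surp=$(get_meminfo HugePages_Surp)   # pages allocated beyond the pool
        resv=$(get_meminfo HugePages_Rsvd)   # pages promised but not yet faulted in
        total=$(get_meminfo HugePages_Total)
        free=$(get_meminfo HugePages_Free)
        (( total == expected ))      || { echo "pool is $total, wanted $expected"; return 1; }
        (( surp == 0 && resv == 0 )) || { echo "surplus/reserved pages present"; return 1; }
        echo "verified: $total total, $free free, anon THP ${anon} kB"
    }

    verify_nr_hugepages 1024
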
00:04:08.764 15:46:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:08.764 15:46:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:08.764 15:46:20 -- setup/common.sh@18 -- # local node=
00:04:08.764 15:46:20 -- setup/common.sh@19 -- # local var val
00:04:08.764 15:46:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:08.764 15:46:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.764 15:46:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.764 15:46:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.764 15:46:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.764 15:46:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.764 15:46:20 -- setup/common.sh@31 -- # IFS=': '
00:04:08.764 15:46:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7941384 kB' 'MemAvailable: 9497044 kB' 'Buffers: 2684 kB' 'Cached: 1768944 kB' 'SwapCached: 0 kB' 'Active: 467300 kB' 'Inactive: 1421904 kB' 'Active(anon): 128076 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421904 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 119164 kB' 'Mapped: 50736 kB' 'Shmem: 10492 kB' 'KReclaimable: 63368 kB' 'Slab: 161452 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98084 kB' 'KernelStack: 6496 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 325404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
00:04:08.764 15:46:20 -- setup/common.sh@31 -- # read -r var val _
00:04:08.765 15:46:20 -- setup/common.sh@32 -- # [... scan iterations elided: every field from MemTotal through HugePages_Free is tested against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skipped with "continue" ...]
00:04:08.765 15:46:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.765 15:46:20 -- setup/common.sh@33 -- # echo 0
00:04:08.765 15:46:20 -- setup/common.sh@33 -- # return 0
00:04:08.765 nr_hugepages=1024
00:04:08.765 15:46:20 -- setup/hugepages.sh@100 -- # resv=0
00:04:08.765 15:46:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:08.765 resv_hugepages=0
00:04:08.765 surplus_hugepages=0
00:04:08.765 15:46:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:08.765 15:46:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:08.765 anon_hugepages=0
00:04:08.765 15:46:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:08.765 15:46:20 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:08.765 15:46:20 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
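The two arithmetic guards above are the heart of the verification: the pool the kernel reports must consist exactly of the pages the test requested, with no surplus or reserved remainder. A hedged sketch of that bookkeeping (values taken from the snapshot above; the echo is illustrative, not part of the script):

    nr_hugepages=1024   # pages requested by default_setup
    surp=0              # HugePages_Surp, parsed above
    resv=0              # HugePages_Rsvd, parsed above
    total=1024          # HugePages_Total, re-read just below
    (( total == nr_hugepages + surp + resv )) && echo 'hugepage pool fully accounted for'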
00:04:08.766 15:46:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:08.766 15:46:20 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:08.766 15:46:20 -- setup/common.sh@18 -- # local node=
00:04:08.766 15:46:20 -- setup/common.sh@19 -- # local var val
00:04:08.766 15:46:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:08.766 15:46:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.766 15:46:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.766 15:46:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.766 15:46:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.766 15:46:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.766 15:46:20 -- setup/common.sh@31 -- # IFS=': '
00:04:08.766 15:46:20 -- setup/common.sh@31 -- # read -r var val _
00:04:08.766 15:46:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7941644 kB' 'MemAvailable: 9497304 kB' 'Buffers: 2684 kB' 'Cached: 1768944 kB' 'SwapCached: 0 kB' 'Active: 467052 kB' 'Inactive: 1421904 kB' 'Active(anon): 127828 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421904 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 118652 kB' 'Mapped: 50736 kB' 'Shmem: 10492 kB' 'KReclaimable: 63368 kB' 'Slab: 161448 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98080 kB' 'KernelStack: 6496 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 325036 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
00:04:09.025 15:46:20 -- setup/common.sh@32 -- # [... scan iterations elided: every field before HugePages_Total is tested against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and skipped with "continue" ...]
00:04:09.026 15:46:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.026 15:46:20 -- setup/common.sh@33 -- # echo 1024
00:04:09.026 15:46:20 -- setup/common.sh@33 -- # return 0
00:04:09.026 15:46:20 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:09.026 15:46:20 -- setup/hugepages.sh@112 -- # get_nodes
00:04:09.026 15:46:20 -- setup/hugepages.sh@27 -- # local node
00:04:09.026 15:46:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:09.026 15:46:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:09.026 15:46:20 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:09.026 15:46:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:09.026 15:46:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:09.026 15:46:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:09.026 15:46:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:09.026 15:46:20 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:09.026 15:46:20 -- setup/common.sh@18 -- # local node=0
00:04:09.026 15:46:20 -- setup/common.sh@19 -- # local var val
00:04:09.026 15:46:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:09.026 15:46:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.026 15:46:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:09.026 15:46:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
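Here the node argument flips get_meminfo from the system-wide /proc/meminfo to the per-node sysfs file, whose lines carry a "Node 0 " prefix that the script strips before parsing. A minimal sketch of that branch (extglob is assumed on, as the +([0-9]) pattern in the trace requires):

    shopt -s extglob                                  # enables the +([0-9]) pattern below
    node=0
    mem_f=/proc/meminfo                               # system-wide fallback
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # per-node view
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")                  # "Node 0 MemFree: ..." -> "MemFree: ..."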
00:04:09.026 15:46:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.026 15:46:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.026 15:46:20 -- setup/common.sh@31 -- # IFS=': '
00:04:09.026 15:46:20 -- setup/common.sh@31 -- # read -r var val _
00:04:09.026 15:46:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7941644 kB' 'MemUsed: 4295452 kB' 'SwapCached: 0 kB' 'Active: 467336 kB' 'Inactive: 1421904 kB' 'Active(anon): 128112 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421904 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'FilePages: 1771628 kB' 'Mapped: 50848 kB' 'AnonPages: 118980 kB' 'Shmem: 10492 kB' 'KernelStack: 6528 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63368 kB' 'Slab: 161444 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98076 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:09.026 15:46:20 -- setup/common.sh@32 -- # [... scan iterations elided: every node0 field before HugePages_Surp is tested against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped with "continue" ...]
00:04:09.027 15:46:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.027 15:46:20 -- setup/common.sh@33 -- # echo 0
00:04:09.027 15:46:20 -- setup/common.sh@33 -- # return 0
00:04:09.027 15:46:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:09.027 15:46:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:09.027 15:46:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:09.027 15:46:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:09.027 node0=1024 expecting 1024
00:04:09.027 15:46:20 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:09.027 15:46:20 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:09.027 
00:04:09.027 real	0m1.095s
00:04:09.027 user	0m0.455s
00:04:09.027 sys	0m0.582s
00:04:09.027 15:46:20 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:09.027 ************************************
00:04:09.027 15:46:20 -- common/autotest_common.sh@10 -- # set +x
00:04:09.027 END TEST default_setup
00:04:09.027 ************************************
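The sorted_t/sorted_s assignments just before the test summary use a common bash trick: each per-node count is written as an array *index*, so duplicates collapse and one entry per distinct value remains for the expected-vs-actual comparison. A small stand-alone illustration (array names mirror the trace; the final comparison line is a hedged guess at intent, not the script's exact check):

    declare -a sorted_t=() sorted_s=()
    nodes_test=(1024)   # pages the test expects on each node
    nodes_sys=(1024)    # pages the kernel actually reports per node
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # value used as index -> acts as a set
        sorted_s[nodes_sys[node]]=1
    done
    echo "expected: ${!sorted_t[*]}  actual: ${!sorted_s[*]}"   # both print 1024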
00:04:09.027 15:46:20 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:09.027 15:46:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:09.027 15:46:20 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:09.027 15:46:20 -- common/autotest_common.sh@10 -- # set +x
00:04:09.027 ************************************
00:04:09.027 START TEST per_node_1G_alloc
00:04:09.027 ************************************
00:04:09.027 15:46:20 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:04:09.027 15:46:20 -- setup/hugepages.sh@143 -- # local IFS=,
00:04:09.027 15:46:20 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:09.027 15:46:20 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:09.027 15:46:20 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:09.027 15:46:20 -- setup/hugepages.sh@51 -- # shift
00:04:09.027 15:46:20 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:09.027 15:46:20 -- setup/hugepages.sh@52 -- # local node_ids
00:04:09.027 15:46:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:09.027 15:46:20 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:09.027 15:46:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:09.027 15:46:20 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:09.027 15:46:20 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:09.027 15:46:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:09.027 15:46:20 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:09.027 15:46:20 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:09.027 15:46:20 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:09.027 15:46:20 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:09.027 15:46:20 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:09.027 15:46:20 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:09.027 15:46:20 -- setup/hugepages.sh@73 -- # return 0
00:04:09.027 15:46:20 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:09.027 15:46:20 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:09.027 15:46:20 -- setup/hugepages.sh@146 -- # setup output
00:04:09.027 15:46:20 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:09.027 15:46:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:09.287 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:09.287 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:09.287 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:09.287 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:09.287 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:09.287 15:46:20 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:04:09.287 15:46:20 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:09.287 15:46:20 -- setup/hugepages.sh@89 -- # local node
00:04:09.287 15:46:20 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:09.287 15:46:20 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:09.287 15:46:20 -- setup/hugepages.sh@92 -- # local surp
00:04:09.287 15:46:20 -- setup/hugepages.sh@93 -- # local resv
00:04:09.287 15:46:20 -- setup/hugepages.sh@94 -- # local anon
00:04:09.287 15:46:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
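The nr_hugepages=512 computed in the get_test_nr_hugepages trace above is plain division: the test asks for 1 GiB (1048576 kB) worth of hugepages on node 0, at the platform's default 2048 kB page size. A hedged sketch of that arithmetic (variable names mirror the trace; default_hugepages is assumed to hold Hugepagesize in kB, as the snapshots suggest):

    size=1048576            # kB requested: one node-pinned 1 GiB allocation
    default_hugepages=2048  # kB, matching "Hugepagesize: 2048 kB" in the snapshots
    if (( size >= default_hugepages )); then
        nr_hugepages=$(( size / default_hugepages ))
    fi
    echo "nr_hugepages=$nr_hugepages"   # -> 512, pinned to node 0 via HUGENODE=0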
00:04:09.287 15:46:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:09.287 15:46:20 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:09.287 15:46:20 -- setup/common.sh@18 -- # local node=
00:04:09.287 15:46:20 -- setup/common.sh@19 -- # local var val
00:04:09.287 15:46:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:09.287 15:46:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.287 15:46:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.287 15:46:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.287 15:46:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.287 15:46:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.287 15:46:20 -- setup/common.sh@31 -- # IFS=': '
00:04:09.287 15:46:20 -- setup/common.sh@31 -- # read -r var val _
00:04:09.287 15:46:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8993368 kB' 'MemAvailable: 10549040 kB' 'Buffers: 2684 kB' 'Cached: 1768948 kB' 'SwapCached: 0 kB' 'Active: 467396 kB' 'Inactive: 1421916 kB' 'Active(anon): 128172 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 119256 kB' 'Mapped: 50840 kB' 'Shmem: 10492 kB' 'KReclaimable: 63368 kB' 'Slab: 161460 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98092 kB' 'KernelStack: 6496 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 325404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
00:04:09.288 15:46:20 -- setup/common.sh@32 -- # [... scan iterations elided: every field from MemTotal through HardwareCorrupted is tested against \A\n\o\n\H\u\g\e\P\a\g\e\s and skipped with "continue" ...]
00:04:09.289 15:46:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.289 15:46:20 -- setup/common.sh@33 -- # echo 0
00:04:09.289 15:46:20 -- setup/common.sh@33 -- # return 0
00:04:09.289 15:46:20 -- setup/hugepages.sh@97 -- # anon=0
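The "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" probe that preceded this read is the transparent-hugepage policy check: the bracketed word in that sysfs file marks the active policy, and the escaped pattern just matches a literal "[never]". Since the policy here is madvise, the test also samples AnonHugePages, which came back 0 above. A minimal sketch of that probe (stand-alone illustration, not the script verbatim):

    thp=/sys/kernel/mm/transparent_hugepage/enabled
    if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
        # THP may back anonymous memory, so include it in the accounting
        awk '/^AnonHugePages:/ {print "anon_hugepages=" $2}' /proc/meminfo
    fi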
00:04:09.289 15:46:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:09.289 15:46:20 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:09.289 15:46:20 -- setup/common.sh@18 -- # local node=
00:04:09.289 15:46:20 -- setup/common.sh@19 -- # local var val
00:04:09.289 15:46:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:09.289 15:46:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.289 15:46:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.289 15:46:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.289 15:46:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.289 15:46:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.289 15:46:20 -- setup/common.sh@31 -- # IFS=': '
00:04:09.289 15:46:20 -- setup/common.sh@31 -- # read -r var val _
00:04:09.289 15:46:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8993368 kB' 'MemAvailable: 10549040 kB' 'Buffers: 2684 kB' 'Cached: 1768948 kB' 'SwapCached: 0 kB' 'Active: 467212 kB' 'Inactive: 1421916 kB' 'Active(anon): 127988 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 119032 kB' 'Mapped: 50736 kB' 'Shmem: 10492 kB' 'KReclaimable: 63368 kB' 'Slab: 161460 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98092 kB' 'KernelStack: 6480 kB' 'PageTables: 4028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 325404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
[... xtrace elided: setup/common.sh@31-32 compare every key from MemTotal through HugePages_Rsvd against HugePages_Surp and skip it with "continue" ...]
00:04:09.553 15:46:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.553 15:46:20 -- setup/common.sh@33 -- # echo 0
00:04:09.553 15:46:20 -- setup/common.sh@33 -- # return 0
00:04:09.553 15:46:20 -- setup/hugepages.sh@99 -- # surp=0
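As a quick aside, the same single-key lookup can be spot-checked outside the harness with a one-liner; this is just an equivalent ad-hoc check, not something the test itself runs:

    # Equivalent manual lookup of the value the trace just returned
    awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo   # prints 0 on this box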
00:04:09.553 15:46:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:09.553 15:46:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:09.553 15:46:20 -- setup/common.sh@18 -- # local node=
00:04:09.553 15:46:20 -- setup/common.sh@19 -- # local var val
00:04:09.553 15:46:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:09.553 15:46:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.553 15:46:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.553 15:46:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.553 15:46:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.553 15:46:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.553 15:46:20 -- setup/common.sh@31 -- # IFS=': '
00:04:09.553 15:46:20 -- setup/common.sh@31 -- # read -r var val _
00:04:09.553 15:46:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8993368 kB' 'MemAvailable: 10549040 kB' 'Buffers: 2684 kB' 'Cached: 1768948 kB' 'SwapCached: 0 kB' 'Active: 467176 kB' 'Inactive: 1421916 kB' 'Active(anon): 127952 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 118996 kB' 'Mapped: 50736 kB' 'Shmem: 10492 kB' 'KReclaimable: 63368 kB' 'Slab: 161452 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98084 kB' 'KernelStack: 6464 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 325404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
[... xtrace elided: setup/common.sh@31-32 compare every key from MemTotal through HugePages_Free against HugePages_Rsvd and skip it with "continue" ...]
00:04:09.555 15:46:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.555 15:46:20 -- setup/common.sh@33 -- # echo 0
00:04:09.555 15:46:20 -- setup/common.sh@33 -- # return 0
00:04:09.555 15:46:20 -- setup/hugepages.sh@100 -- # resv=0
00:04:09.555 15:46:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:09.555 nr_hugepages=512
00:04:09.555 15:46:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:09.555 resv_hugepages=0
00:04:09.555 15:46:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:09.555 surplus_hugepages=0
00:04:09.555 15:46:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:09.555 anon_hugepages=0
00:04:09.555 15:46:20 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:09.555 15:46:20 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
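The consistency check traced here is plain arithmetic: the HugePages_Total the kernel reports back must equal the requested page count plus surplus and reserved pages. Restated as a minimal standalone step (hypothetical variable names, reusing the get_meminfo sketch above):

    # Minimal re-statement of the verify step, same logic as hugepages.sh@107
    nr_hugepages=512                        # what the test requested
    total=$(get_meminfo HugePages_Total)    # 512, read back from the kernel
    surp=$(get_meminfo HugePages_Surp)      # 0
    resv=$(get_meminfo HugePages_Rsvd)      # 0

    # 512 == 512 + 0 + 0, so the allocation is exactly as requested
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2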
00:04:09.555 15:46:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:09.555 15:46:20 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:09.555 15:46:20 -- setup/common.sh@18 -- # local node=
00:04:09.555 15:46:20 -- setup/common.sh@19 -- # local var val
00:04:09.555 15:46:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:09.555 15:46:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.555 15:46:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.555 15:46:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.555 15:46:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.555 15:46:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.555 15:46:20 -- setup/common.sh@31 -- # IFS=': '
00:04:09.555 15:46:20 -- setup/common.sh@31 -- # read -r var val _
00:04:09.555 15:46:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8993368 kB' 'MemAvailable: 10549040 kB' 'Buffers: 2684 kB' 'Cached: 1768948 kB' 'SwapCached: 0 kB' 'Active: 466964 kB' 'Inactive: 1421916 kB' 'Active(anon): 127740 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 118832 kB' 'Mapped: 50736 kB' 'Shmem: 10492 kB' 'KReclaimable: 63368 kB' 'Slab: 161452 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98084 kB' 'KernelStack: 6464 kB' 'PageTables: 3984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 325404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
[... xtrace elided: setup/common.sh@31-32 compare every key from MemTotal through Unaccepted against HugePages_Total and skip it with "continue" ...]
00:04:09.557 15:46:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.557 15:46:20 -- setup/common.sh@33 -- # echo 512
00:04:09.557 15:46:20 -- setup/common.sh@33 -- # return 0
00:04:09.557 15:46:20 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:09.557 15:46:20 -- setup/hugepages.sh@112 -- # get_nodes
00:04:09.557 15:46:20 -- setup/hugepages.sh@27 -- # local node
00:04:09.557 15:46:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:09.557 15:46:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:09.557 15:46:20 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:09.557 15:46:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:09.557 15:46:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:09.557 15:46:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
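get_nodes, traced just above at setup/hugepages.sh@27-33, enumerates the sysfs node directories with an extglob pattern; on this single-socket VM it finds only node0, hence no_nodes=1. A sketch of that enumeration (how the per-node 512 is computed is not visible in this log slice, so the get_meminfo call below is an assumption for illustration):

    # Sketch of the get_nodes enumeration; reuses the get_meminfo sketch above
    shopt -s extglob
    declare -a nodes_sys

    for node in /sys/devices/system/node/node+([0-9]); do
        # ".../node0" -> index 0; record that node's current HugePages_Total
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done

    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || echo "no NUMA nodes found" >&2
    echo "no_nodes=$no_nodes"   # 1 on this single-node VM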
00:04:09.557 15:46:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:09.557 15:46:20 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:09.557 15:46:20 -- setup/common.sh@18 -- # local node=0
00:04:09.557 15:46:20 -- setup/common.sh@19 -- # local var val
00:04:09.557 15:46:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:09.557 15:46:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.557 15:46:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:09.557 15:46:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:09.557 15:46:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.557 15:46:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.557 15:46:20 -- setup/common.sh@31 -- # IFS=': '
00:04:09.557 15:46:20 -- setup/common.sh@31 -- # read -r var val _
00:04:09.557 15:46:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8993368 kB' 'MemUsed: 3243728 kB' 'SwapCached: 0 kB' 'Active: 467224 kB' 'Inactive: 1421916 kB' 'Active(anon): 128000 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'FilePages: 1771632 kB' 'Mapped: 50736 kB' 'AnonPages: 119092 kB' 'Shmem: 10492 kB' 'KernelStack: 6532 kB' 'PageTables: 3984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63368 kB' 'Slab: 161452 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98084 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
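Note the per-node variant of the lookup here: with node=0, mem_f switches to /sys/devices/system/node/node0/meminfo, whose records carry a "Node 0 " prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips before parsing. In isolation:

    # Demonstration of the prefix strip seen at setup/common.sh@29
    shopt -s extglob
    line='Node 0 HugePages_Surp: 0'
    echo "${line#Node +([0-9]) }"   # -> HugePages_Surp: 0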
15:46:20 -- setup/common.sh@32 -- # continue 00:04:09.558 15:46:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.558 15:46:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.558 15:46:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.558 15:46:20 -- setup/common.sh@32 -- # continue 00:04:09.558 15:46:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.558 15:46:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.558 15:46:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.558 15:46:20 -- setup/common.sh@32 -- # continue 00:04:09.558 15:46:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.558 15:46:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.558 15:46:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.558 15:46:20 -- setup/common.sh@32 -- # continue 00:04:09.558 15:46:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.558 15:46:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.558 15:46:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.558 15:46:20 -- setup/common.sh@32 -- # continue 00:04:09.558 15:46:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.558 15:46:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.558 15:46:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.558 15:46:20 -- setup/common.sh@32 -- # continue 00:04:09.558 15:46:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.558 15:46:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.558 15:46:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.558 15:46:20 -- setup/common.sh@32 -- # continue 00:04:09.558 15:46:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.558 15:46:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.558 15:46:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.558 15:46:20 -- setup/common.sh@32 -- # continue 00:04:09.558 15:46:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.558 15:46:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.558 15:46:20 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.558 15:46:20 -- setup/common.sh@32 -- # continue 00:04:09.558 15:46:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.558 15:46:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.558 15:46:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.558 15:46:20 -- setup/common.sh@32 -- # continue 00:04:09.558 15:46:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.558 15:46:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.558 15:46:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.558 15:46:20 -- setup/common.sh@32 -- # continue 00:04:09.558 15:46:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.558 15:46:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.558 15:46:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.558 15:46:20 -- setup/common.sh@32 -- # continue 00:04:09.558 15:46:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.558 15:46:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.558 15:46:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.558 15:46:20 -- setup/common.sh@32 -- # continue 00:04:09.558 15:46:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.558 15:46:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.558 
15:46:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.558 15:46:20 -- setup/common.sh@32 -- # continue  [this check/continue pair repeats for every remaining /proc/meminfo key until HugePages_Surp matches]
00:04:09.558 15:46:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.558 15:46:20 -- setup/common.sh@33 -- # echo 0
00:04:09.558 15:46:20 -- setup/common.sh@33 -- # return 0
00:04:09.558 15:46:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:09.558 15:46:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:09.558 15:46:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:09.559 15:46:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:09.559 node0=512 expecting 512
00:04:09.559 15:46:20 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:09.559 15:46:20 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:09.559 
00:04:09.559 real 0m0.521s
00:04:09.559 user 0m0.242s
00:04:09.559 sys 0m0.314s
00:04:09.559 15:46:20 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:09.559 15:46:20 -- common/autotest_common.sh@10 -- # set +x
00:04:09.559 ************************************
00:04:09.559 END TEST per_node_1G_alloc
00:04:09.559 ************************************
00:04:09.559 15:46:20 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:09.559 15:46:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:09.559 15:46:20 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:09.559 15:46:20 -- common/autotest_common.sh@10 -- # set +x
00:04:09.559 ************************************
00:04:09.559 START TEST even_2G_alloc
00:04:09.559 ************************************
00:04:09.559 15:46:20 -- common/autotest_common.sh@1114 -- # even_2G_alloc
00:04:09.559 15:46:20 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:09.559 15:46:20 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:09.559 15:46:20 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:09.559 15:46:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:09.559 15:46:20 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:09.559 15:46:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:09.559 15:46:20 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:09.559 15:46:20 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:09.559 15:46:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
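
The @49-@57 records above show get_test_nr_hugepages turning the requested 2097152 kB (2 GiB) into a count of default-sized hugepages. A minimal sketch of that arithmetic, assuming sizes in kB and the 2048 kB Hugepagesize reported later in /proc/meminfo (an illustrative helper, not the SPDK source; the (( 1 > 1 )) record is the check for an optional per-node argument list):

    # Hypothetical sketch of the arithmetic traced above, not the SPDK source.
    get_test_nr_hugepages() {
        local size=$1                 # requested size in kB (2097152 here)
        local default_hugepages=2048  # Hugepagesize in kB, per /proc/meminfo
        (( size >= default_hugepages )) || return 1
        # 2097152 / 2048 = 1024, matching nr_hugepages=1024 in the trace
        nr_hugepages=$(( size / default_hugepages ))
    }
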
00:04:09.559 15:46:20 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:09.559 15:46:20 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:09.559 15:46:20 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:09.559 15:46:20 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:09.559 15:46:20 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:09.559 15:46:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:09.559 15:46:20 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:09.559 15:46:20 -- setup/hugepages.sh@83 -- # : 0
00:04:09.559 15:46:20 -- setup/hugepages.sh@84 -- # : 0
00:04:09.559 15:46:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:09.559 15:46:20 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:09.559 15:46:20 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:09.559 15:46:20 -- setup/hugepages.sh@153 -- # setup output
00:04:09.559 15:46:20 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:09.559 15:46:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:09.817 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:09.817 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:09.817 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:09.817 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:09.817 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:10.080 15:46:21 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:10.080 15:46:21 -- setup/hugepages.sh@89 -- # local node
00:04:10.080 15:46:21 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:10.080 15:46:21 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:10.080 15:46:21 -- setup/hugepages.sh@92 -- # local surp
00:04:10.080 15:46:21 -- setup/hugepages.sh@93 -- # local resv
00:04:10.080 15:46:21 -- setup/hugepages.sh@94 -- # local anon
00:04:10.080 15:46:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:10.080 15:46:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:10.080 15:46:21 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:10.080 15:46:21 -- setup/common.sh@18 -- # local node=
00:04:10.080 15:46:21 -- setup/common.sh@19 -- # local var val
00:04:10.080 15:46:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:10.080 15:46:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.080 15:46:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.080 15:46:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.080 15:46:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.080 15:46:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.080 15:46:21 -- setup/common.sh@31 -- # IFS=': '
00:04:10.080 15:46:21 -- setup/common.sh@31 -- # read -r var val _
00:04:10.080 15:46:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7953288 kB' 'MemAvailable: 9508960 kB' 'Buffers: 2684 kB' 'Cached: 1768948 kB' 'SwapCached: 0 kB' 'Active: 467984 kB' 'Inactive: 1421916 kB' 'Active(anon): 128760 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 119460 kB' 'Mapped: 50808 kB' 'Shmem: 10492 kB' 'KReclaimable: 63368 kB' 'Slab: 161360 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 97992 kB' 'KernelStack: 6504 kB' 'PageTables: 3956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 325404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
00:04:10.080 15:46:21 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.080 15:46:21 -- setup/common.sh@32 -- # continue  [this check/continue pair repeats for every /proc/meminfo key until AnonHugePages matches]
00:04:10.081 15:46:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.081 15:46:21 -- setup/common.sh@33 -- # echo 0
00:04:10.081 15:46:21 -- setup/common.sh@33 -- # return 0
00:04:10.081 15:46:21 -- setup/hugepages.sh@97 -- # anon=0
00:04:10.081 15:46:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:10.081 15:46:21 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:10.081 15:46:21 -- setup/common.sh@18 -- # local node=
00:04:10.081 15:46:21 -- setup/common.sh@19 -- # local var val
00:04:10.081 15:46:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:10.081 15:46:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.081 15:46:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.081 15:46:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.081 15:46:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.081 15:46:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.081 15:46:21 -- setup/common.sh@31 -- # IFS=': '
00:04:10.081 15:46:21 -- setup/common.sh@31 -- # read -r var val _
00:04:10.082 15:46:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7953288 kB' 'MemAvailable: 9508960 kB' 'Buffers: 2684 kB' 'Cached: 1768948 kB' 'SwapCached: 0 kB' 'Active: 467764 kB' 'Inactive: 1421916 kB' 'Active(anon): 128540 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 119180 kB' 'Mapped: 50756 kB' 'Shmem: 10492 kB' 'KReclaimable: 63368 kB' 'Slab: 161368 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98000 kB' 'KernelStack: 6488 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 325404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
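
Each get_meminfo call above produces the same long pattern: the function snapshots /proc/meminfo, then walks it with IFS=': ' and read -r var val _, logging one continue for every key that is not the one requested (the \H\u\g\e\P\a\g\e\s\_\S\u\r\p strings are just bash xtrace re-quoting the literal match pattern character by character). A condensed sketch of that loop, assuming the simplest possible reader rather than the exact SPDK implementation:

    # Condensed, hypothetical sketch of get_meminfo, not the SPDK source.
    get_meminfo() {
        local get=$1 var val _
        local mem_f=/proc/meminfo
        # scan each "Key: value [unit]" line; non-matching keys hit "continue"
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"               # e.g. "0" for HugePages_Surp
            return 0
        done < "$mem_f"
        return 1                      # requested key not present
    }

For example, get_meminfo HugePages_Surp prints 0 here, which verify_nr_hugepages stores as surp=0 below.
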
00:04:10.082 15:46:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.082 15:46:21 -- setup/common.sh@32 -- # continue  [this check/continue pair repeats for every /proc/meminfo key until HugePages_Surp matches]
00:04:10.083 15:46:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.083 15:46:21 -- setup/common.sh@33 -- # echo 0
00:04:10.083 15:46:21 -- setup/common.sh@33 -- # return 0
00:04:10.083 15:46:21 -- setup/hugepages.sh@99 -- # surp=0
00:04:10.083 15:46:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:10.083 15:46:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:10.083 15:46:21 -- setup/common.sh@18 -- # local node=
00:04:10.083 15:46:21 -- setup/common.sh@19 -- # local var val
00:04:10.083 15:46:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:10.083 15:46:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.083 15:46:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.083 15:46:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.083 15:46:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.083 15:46:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.083 15:46:21 -- setup/common.sh@31 -- # IFS=': '
00:04:10.083 15:46:21 -- setup/common.sh@31 -- # read -r var val _
00:04:10.083 15:46:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7953288 kB' 'MemAvailable: 9508960 kB' 'Buffers: 2684 kB' 'Cached: 1768948 kB' 'SwapCached: 0 kB' 'Active: 467640 kB' 'Inactive: 1421916 kB' 'Active(anon): 128416 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 119060 kB' 'Mapped: 50756 kB' 'Shmem: 10492 kB' 'KReclaimable: 63368 kB' 'Slab: 161368 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98000 kB' 'KernelStack: 6504 kB' 'PageTables: 3956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 325404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
00:04:10.083 15:46:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.083 15:46:21 -- setup/common.sh@32 -- # continue  [this check/continue pair repeats for every /proc/meminfo key until HugePages_Rsvd matches]
00:04:10.084 15:46:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.084 15:46:21 -- setup/common.sh@33 -- # echo 0
00:04:10.084 15:46:21 -- setup/common.sh@33 -- # return 0
00:04:10.084 15:46:21 -- setup/hugepages.sh@100 -- # resv=0
00:04:10.084 nr_hugepages=1024
00:04:10.084 15:46:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:10.084 resv_hugepages=0
00:04:10.084 15:46:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:10.084 surplus_hugepages=0
00:04:10.084 15:46:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:10.084 anon_hugepages=0
00:04:10.084 15:46:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:10.084 15:46:21 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:10.084 15:46:21 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:10.084 15:46:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:10.084 15:46:21 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:10.084 15:46:21 -- setup/common.sh@18 -- # local node=
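
The two arithmetic checks at setup/hugepages.sh@107 and @109 above are the heart of verify_nr_hugepages: with surp=0, resv=0 and anon=0, the 1024 hugepages the kernel reports must equal exactly the 1024 the test requested via NRHUGE. A simplified sketch of that accounting, reusing the get_meminfo helper sketched earlier (illustrative only, not the SPDK source):

    # Hypothetical sketch of the verification step, not the SPDK source.
    verify_nr_hugepages() {
        local nr_hugepages=1024   # requested via NRHUGE
        local surp resv total
        surp=$(get_meminfo HugePages_Surp)    # 0 in this run
        resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
        total=$(get_meminfo HugePages_Total)  # 1024 in this run
        # pass only if the pool matches the request once surplus and
        # reserved pages are accounted for
        (( total == nr_hugepages + surp + resv ))
    }
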
00:04:10.084 15:46:21 -- setup/common.sh@19 -- # local var val
00:04:10.084 15:46:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:10.084 15:46:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.084 15:46:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.084 15:46:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.084 15:46:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.084 15:46:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.084 15:46:21 -- setup/common.sh@31 -- # IFS=': '
00:04:10.084 15:46:21 -- setup/common.sh@31 -- # read -r var val _
00:04:10.085 15:46:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7953288 kB' 'MemAvailable: 9508960 kB' 'Buffers: 2684 kB' 'Cached: 1768948 kB' 'SwapCached: 0 kB' 'Active: 467668 kB' 'Inactive: 1421916 kB' 'Active(anon): 128444 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 119312 kB' 'Mapped: 50628 kB' 'Shmem: 10492 kB' 'KReclaimable: 63368 kB' 'Slab: 161392 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98024 kB' 'KernelStack: 6480 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 325404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
00:04:10.085 15:46:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.085 15:46:21 -- setup/common.sh@32 -- # continue  [this check/continue pair repeats for each /proc/meminfo key; the scan for HugePages_Total continues]
00:04:10.086 15:46:21 -- 
setup/common.sh@32 -- # continue 00:04:10.086 15:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.086 15:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.086 15:46:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.086 15:46:21 -- setup/common.sh@32 -- # continue 00:04:10.086 15:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.086 15:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.086 15:46:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.086 15:46:21 -- setup/common.sh@32 -- # continue 00:04:10.086 15:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.086 15:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.086 15:46:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.086 15:46:21 -- setup/common.sh@32 -- # continue 00:04:10.086 15:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.086 15:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.086 15:46:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.086 15:46:21 -- setup/common.sh@32 -- # continue 00:04:10.086 15:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.086 15:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.086 15:46:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.086 15:46:21 -- setup/common.sh@33 -- # echo 1024 00:04:10.086 15:46:21 -- setup/common.sh@33 -- # return 0 00:04:10.086 15:46:21 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.086 15:46:21 -- setup/hugepages.sh@112 -- # get_nodes 00:04:10.086 15:46:21 -- setup/hugepages.sh@27 -- # local node 00:04:10.086 15:46:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.086 15:46:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:10.086 15:46:21 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:10.086 15:46:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:10.086 15:46:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.086 15:46:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.086 15:46:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:10.086 15:46:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.086 15:46:21 -- setup/common.sh@18 -- # local node=0 00:04:10.086 15:46:21 -- setup/common.sh@19 -- # local var val 00:04:10.086 15:46:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.086 15:46:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.086 15:46:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.086 15:46:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.086 15:46:21 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.086 15:46:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.086 15:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.086 15:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.086 15:46:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7953288 kB' 'MemUsed: 4283808 kB' 'SwapCached: 0 kB' 'Active: 467400 kB' 'Inactive: 1421916 kB' 'Active(anon): 128176 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'FilePages: 1771632 kB' 'Mapped: 50628 kB' 'AnonPages: 119044 kB' 'Shmem: 10492 kB' 
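The preceding trace is setup/common.sh's get_meminfo helper scanning every key of /proc/meminfo (or of a per-node meminfo file under sysfs) until it reaches the requested field, echoing the value and returning. A minimal sketch of that parsing pattern, assuming a simplified stand-in name get_meminfo_sketch and omitting the real helper's xtrace plumbing:

    # Sketch only: print the value of the first meminfo key matching $1.
    # The real helper in test/setup/common.sh also strips the "Node N" prefix
    # from per-node files, which the sed call below approximates.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        local var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
        return 1
    }

    get_meminfo_sketch HugePages_Total    # prints 1024 on this box
    get_meminfo_sketch HugePages_Surp 0   # per-node form, prints 0 here

The escaped patterns in the log, e.g. [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]], are just how bash xtrace renders the quoted right-hand side of this comparison loop.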
00:04:10.087 15:46:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:10.087 15:46:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:10.087 15:46:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:10.087 15:46:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:10.087 node0=1024 expecting 1024
00:04:10.087 15:46:21 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:10.087 15:46:21 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:10.087 
00:04:10.087 real 0m0.530s
00:04:10.087 user 0m0.238s
00:04:10.087 sys 0m0.319s
00:04:10.087 15:46:21 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:10.087 15:46:21 -- common/autotest_common.sh@10 -- # set +x
00:04:10.087 ************************************
00:04:10.087 END TEST even_2G_alloc
00:04:10.087 ************************************
00:04:10.087 15:46:21 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:10.087 15:46:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:10.087 15:46:21 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:10.087 15:46:21 -- common/autotest_common.sh@10 -- # set +x
00:04:10.087 ************************************
00:04:10.087 START TEST odd_alloc
00:04:10.087 ************************************
00:04:10.087 15:46:21 -- common/autotest_common.sh@1114 -- # odd_alloc
00:04:10.087 15:46:21 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:10.087 15:46:21 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:10.087 15:46:21 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:10.087 15:46:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:10.087 15:46:21 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:10.087 15:46:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:10.087 15:46:21 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:10.087 15:46:21 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:10.087 15:46:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:10.087 15:46:21 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:10.087 15:46:21 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:10.087 15:46:21 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:10.087 15:46:21 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:10.087 15:46:21 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:10.087 15:46:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:10.087 15:46:21 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:10.087 15:46:21 -- setup/hugepages.sh@83 -- # : 0
00:04:10.087 15:46:21 -- setup/hugepages.sh@84 -- # : 0
00:04:10.087 15:46:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:10.087 15:46:21 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:10.087 15:46:21 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:10.087 15:46:21 -- setup/hugepages.sh@160 -- # setup output
00:04:10.087 15:46:21 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:10.087 15:46:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:10.673 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:10.673 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:10.673 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:10.673 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:10.673 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
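The odd_alloc case deliberately requests an odd page count: HUGEMEM=2049 (megabytes) becomes size=2098176 kB, and with the Hugepagesize of 2048 kB reported above, the traced nr_hugepages=1025 only falls out if the conversion rounds up to whole pages. A sketch of that arithmetic (the ceil division is inferred from the traced values, not quoted from hugepages.sh):

    hugemem_mb=2049                                     # HUGEMEM=2049 from the trace
    size_kb=$((hugemem_mb * 1024))                      # 2098176, matching local size=2098176
    page_kb=2048                                        # Hugepagesize: 2048 kB
    nr_hugepages=$(((size_kb + page_kb - 1) / page_kb)) # round up to whole pages
    echo "$nr_hugepages"                                # 1025, matching nr_hugepages=1025

The Hugetlb figure that appears once setup.sh finishes, 2099200 kB, is exactly 1025 x 2048 kB, confirming the kernel granted the odd count.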
00:04:10.673 15:46:21 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:10.673 15:46:21 -- setup/hugepages.sh@89 -- # local node
00:04:10.673 15:46:21 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:10.673 15:46:21 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:10.673 15:46:21 -- setup/hugepages.sh@92 -- # local surp
00:04:10.673 15:46:21 -- setup/hugepages.sh@93 -- # local resv
00:04:10.673 15:46:21 -- setup/hugepages.sh@94 -- # local anon
00:04:10.673 15:46:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:10.673 15:46:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:10.673 15:46:21 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:10.673 15:46:21 -- setup/common.sh@18 -- # local node=
00:04:10.673 15:46:21 -- setup/common.sh@19 -- # local var val
00:04:10.673 15:46:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:10.673 15:46:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.673 15:46:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.673 15:46:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.673 15:46:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.673 15:46:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.673 15:46:21 -- setup/common.sh@31 -- # IFS=': '
00:04:10.673 15:46:21 -- setup/common.sh@31 -- # read -r var val _
00:04:10.673 15:46:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7953372 kB' 'MemAvailable: 9509044 kB' 'Buffers: 2684 kB' 'Cached: 1768948 kB' 'SwapCached: 0 kB' 'Active: 467880 kB' 'Inactive: 1421916 kB' 'Active(anon): 128656 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 119488 kB' 'Mapped: 50864 kB' 'Shmem: 10492 kB' 'KReclaimable: 63368 kB' 'Slab: 161368 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98000 kB' 'KernelStack: 6524 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 325196 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
00:04:10.673 15:46:21 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.673 15:46:21 -- setup/common.sh@32 -- # continue
00:04:10.674 15:46:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.674 15:46:21 -- setup/common.sh@33 -- # echo 0
00:04:10.674 15:46:21 -- setup/common.sh@33 -- # return 0
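Before reading AnonHugePages, verify_nr_hugepages gates on transparent hugepages at hugepages.sh@96: the bracketed token in the sysfs file is the active THP mode, and the test passes here because this box is in madvise mode rather than never. A sketch of the same gate, with the script's read loop swapped for a plain awk:

    # Active THP mode is the bracketed word, e.g. "always [madvise] never".
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # Only then is the THP-backed anonymous memory counter meaningful;
        # on this run it reads 0 kB, matching anon=0 below.
        awk '$1 == "AnonHugePages:" {print $2, $3}' /proc/meminfo
    fi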
00:04:10.674 15:46:21 -- setup/hugepages.sh@97 -- # anon=0
00:04:10.674 15:46:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:10.674 15:46:21 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:10.674 15:46:21 -- setup/common.sh@18 -- # local node=
00:04:10.674 15:46:21 -- setup/common.sh@19 -- # local var val
00:04:10.674 15:46:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:10.674 15:46:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.674 15:46:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.674 15:46:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.674 15:46:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.674 15:46:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.674 15:46:21 -- setup/common.sh@31 -- # IFS=': '
00:04:10.674 15:46:21 -- setup/common.sh@31 -- # read -r var val _
00:04:10.675 15:46:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7953608 kB' 'MemAvailable: 9509280 kB' 'Buffers: 2684 kB' 'Cached: 1768948 kB' 'SwapCached: 0 kB' 'Active: 467196 kB' 'Inactive: 1421916 kB' 'Active(anon): 127972 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 119120 kB' 'Mapped: 50552 kB' 'Shmem: 10492 kB' 'KReclaimable: 63368 kB' 'Slab: 161376 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98008 kB' 'KernelStack: 6492 kB' 'PageTables: 3996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 325172 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
00:04:10.675 15:46:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.675 15:46:21 -- setup/common.sh@32 -- # continue
00:04:10.676 15:46:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.676 15:46:21 -- setup/common.sh@33 -- # echo 0
00:04:10.676 15:46:21 -- setup/common.sh@33 -- # return 0
00:04:10.676 15:46:21 -- setup/hugepages.sh@99 -- # surp=0
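With anon and surp collected, the last input the verifier gathers is HugePages_Rsvd, feeding the same pool invariant already exercised at hugepages.sh@110: HugePages_Total must equal the requested count plus surplus and reserved pages. A sketch of that bookkeeping check (read_hp is a made-up helper for the example, using awk in place of the script's read loop):

    read_hp() { awk -v key="$1:" '$1 == key {print $2}' /proc/meminfo; }
    total=$(read_hp HugePages_Total)    # 1025 on this run
    surp=$(read_hp HugePages_Surp)      # 0
    rsvd=$(read_hp HugePages_Rsvd)      # 0
    nr_hugepages=1025                   # the odd_alloc target
    (( total == nr_hugepages + surp + rsvd )) && echo "pool OK: $total pages"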
'MemAvailable: 9509532 kB' 'Buffers: 2684 kB' 'Cached: 1768948 kB' 'SwapCached: 0 kB' 'Active: 467388 kB' 'Inactive: 1421916 kB' 'Active(anon): 128164 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 119072 kB' 'Mapped: 50552 kB' 'Shmem: 10492 kB' 'KReclaimable: 63368 kB' 'Slab: 161372 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98004 kB' 'KernelStack: 6492 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 325404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB' 00:04:10.676 15:46:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.676 15:46:21 -- setup/common.sh@32 -- # continue 00:04:10.676 15:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:46:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.676 15:46:21 -- setup/common.sh@32 -- # continue 00:04:10.676 15:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:46:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.676 15:46:21 -- setup/common.sh@32 -- # continue 00:04:10.676 15:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:46:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.676 15:46:21 -- setup/common.sh@32 -- # continue 00:04:10.676 15:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:46:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.676 15:46:21 -- setup/common.sh@32 -- # continue 00:04:10.676 15:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:46:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.676 15:46:21 -- setup/common.sh@32 -- # continue 00:04:10.676 15:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:46:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.676 15:46:21 -- setup/common.sh@32 -- # continue 00:04:10.676 15:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:46:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.676 15:46:21 -- setup/common.sh@32 -- # continue 00:04:10.676 15:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.676 15:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.676 15:46:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.676 
15:46:21 -- setup/common.sh@32 -- # continue
[xtrace condensed: get_meminfo compares every remaining /proc/meminfo field against HugePages_Rsvd and continues past each non-match; the repeated IFS=': ' / read -r var val _ / [[ <field> == HugePages_Rsvd ]] / continue iterations are omitted here]
00:04:10.677 15:46:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.677 15:46:21 -- setup/common.sh@33 -- # echo 0
00:04:10.677 15:46:21 -- setup/common.sh@33 -- # return 0
00:04:10.677 nr_hugepages=1025
00:04:10.677 15:46:21 -- setup/hugepages.sh@100 -- # resv=0
00:04:10.677 15:46:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:10.677 resv_hugepages=0
00:04:10.677 surplus_hugepages=0
00:04:10.677 anon_hugepages=0
00:04:10.677 15:46:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:10.677 15:46:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:10.677 15:46:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:10.677 15:46:21 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:10.677 15:46:21 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:10.677 15:46:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:10.677 15:46:21 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:10.677 15:46:21 -- setup/common.sh@18 -- # local node=
00:04:10.677 15:46:21 -- setup/common.sh@19 -- # local var val
00:04:10.677 15:46:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:10.677 15:46:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.677 15:46:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.677 15:46:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.677 15:46:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.677 15:46:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.677 15:46:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7953888 kB' 'MemAvailable: 9509560 kB' 'Buffers: 2684 kB' 'Cached: 1768948 kB' 'SwapCached: 0 kB' 'Active: 467216 kB' 'Inactive: 1421916 kB' 'Active(anon): 127992 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 119084 kB' 'Mapped: 50736 kB' 'Shmem: 10492 kB' 'KReclaimable: 63368 kB' 'Slab: 161400 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98032 kB' 'KernelStack: 6448 kB' 'PageTables: 3924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 325404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
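The setup/hugepages.sh@107 check above is the heart of the odd_alloc pass: the pool size the kernel reports must equal the requested 1025 pages plus any surplus and reserved pages. A minimal standalone sketch of the same arithmetic; the function name is illustrative, not SPDK's helper:

#!/usr/bin/env bash
# Sketch of the odd_alloc consistency check: after an odd page count (here
# 1025) is written to nr_hugepages, the kernel-reported pool must add up.
check_hugepage_pool() {
    local expected=$1 total rsvd surp
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    rsvd=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
    surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
    # Same assertion as hugepages.sh@107: reported total must equal the
    # requested count plus surplus and reserved pages.
    (( total == expected + surp + rsvd ))
}
check_hugepage_pool 1025 && echo "nr_hugepages=1025 verified"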
[xtrace condensed: the same IFS=': ' / read -r var val _ / compare / continue scan over every field of the snapshot above, this time matching against HugePages_Total]
00:04:10.679 15:46:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.679 15:46:21 -- setup/common.sh@33 -- # echo 1025
00:04:10.679 15:46:21 -- setup/common.sh@33 -- # return 0
00:04:10.679 15:46:21 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:10.679 15:46:21 -- setup/hugepages.sh@112 -- # get_nodes
00:04:10.679 15:46:21 -- setup/hugepages.sh@27 -- # local node
00:04:10.679 15:46:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:10.679 15:46:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:10.679 15:46:21 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:10.679 15:46:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:10.679 15:46:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:10.679 15:46:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:10.679 15:46:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:10.679 15:46:21 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:10.679 15:46:21 -- setup/common.sh@18 -- # local node=0
00:04:10.679 15:46:21 -- setup/common.sh@19 -- # local var val
00:04:10.679 15:46:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:10.679 15:46:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.679 15:46:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:10.679 15:46:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:10.679 15:46:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.679 15:46:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.679 15:46:21 -- setup/common.sh@31 -- # IFS=': '
00:04:10.679 15:46:21 -- setup/common.sh@31 -- # read -r var val _
00:04:10.679 15:46:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7953888 kB' 'MemUsed: 4283208 kB' 'SwapCached: 0 kB' 'Active: 466988 kB' 'Inactive: 1421916 kB' 'Active(anon): 127764 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'FilePages: 1771632 kB' 'Mapped: 50736 kB' 'AnonPages: 118856 kB' 'Shmem: 10492 kB' 'KernelStack: 6416 kB' 'PageTables: 3824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63368 kB' 'Slab: 161400 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98032 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
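The trace above also shows how get_meminfo retargets itself per node: with node=0 it swaps mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo and strips the 'Node N ' prefix that per-node files carry before matching fields. Roughly equivalent standalone logic, as a sketch rather than the exact SPDK helper:

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern: pick the global or per-node meminfo
# file, drop the "Node N " prefix, then print the requested field's value
# (the trailing " kB" is stripped from sized fields).
get_meminfo_sketch() {
    local get=$1 node=$2 mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    sed -E 's/^Node [0-9]+ +//' "$mem_f" |
        awk -v key="$get" -F': +' '$1 == key {sub(/ kB$/, "", $2); print $2}'
}
get_meminfo_sketch HugePages_Surp 0   # surplus 2 MiB pages on node0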
[xtrace condensed: compare/continue scan of the node0 fields above until HugePages_Surp matches]
00:04:10.680 15:46:21 --
setup/common.sh@31 -- # IFS=': ' 00:04:10.680 15:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.680 15:46:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.680 15:46:21 -- setup/common.sh@32 -- # continue 00:04:10.680 15:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.680 15:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.680 15:46:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.680 15:46:21 -- setup/common.sh@32 -- # continue 00:04:10.680 15:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.680 15:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.680 15:46:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.680 15:46:21 -- setup/common.sh@32 -- # continue 00:04:10.680 15:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.680 15:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.680 15:46:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.680 15:46:21 -- setup/common.sh@32 -- # continue 00:04:10.680 15:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.680 15:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.680 15:46:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.680 15:46:21 -- setup/common.sh@32 -- # continue 00:04:10.680 15:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.680 15:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.680 15:46:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.680 15:46:21 -- setup/common.sh@32 -- # continue 00:04:10.680 15:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.680 15:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.680 15:46:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.680 15:46:21 -- setup/common.sh@32 -- # continue 00:04:10.680 15:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.680 15:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.680 15:46:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.680 15:46:21 -- setup/common.sh@33 -- # echo 0 00:04:10.680 15:46:21 -- setup/common.sh@33 -- # return 0 00:04:10.680 15:46:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.680 15:46:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.680 node0=1025 expecting 1025 00:04:10.680 15:46:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.680 15:46:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.680 15:46:21 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:10.680 15:46:21 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:10.680 00:04:10.680 real 0m0.558s 00:04:10.680 user 0m0.211s 00:04:10.680 sys 0m0.370s 00:04:10.680 15:46:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:10.680 15:46:21 -- common/autotest_common.sh@10 -- # set +x 00:04:10.680 ************************************ 00:04:10.680 END TEST odd_alloc 00:04:10.680 ************************************ 00:04:10.680 15:46:21 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:10.680 15:46:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:10.680 15:46:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:10.680 15:46:21 -- common/autotest_common.sh@10 -- # set +x 00:04:10.680 
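The closing 'node0=1025 expecting 1025' line is the per-node readback that ends odd_alloc. The same check can be made directly against the kernel's per-node hugepage counters; the loop below is a sketch using the standard sysfs layout for 2048 kB pages:

#!/usr/bin/env bash
# Sketch: confirm every NUMA node holds the expected share of the 2048 kB
# hugepage pool, mirroring the "node0=1025 expecting 1025" output above.
expected=1025
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    got=$(< "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
    echo "node$node=$got expecting $expected"
    [[ $got -eq $expected ]] || exit 1
done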
************************************ 00:04:10.680 START TEST custom_alloc 00:04:10.680 ************************************ 00:04:10.680 15:46:22 -- common/autotest_common.sh@1114 -- # custom_alloc 00:04:10.680 15:46:22 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:10.680 15:46:22 -- setup/hugepages.sh@169 -- # local node 00:04:10.680 15:46:22 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:10.680 15:46:22 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:10.680 15:46:22 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:10.680 15:46:22 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:10.680 15:46:22 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:10.680 15:46:22 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:10.680 15:46:22 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:10.680 15:46:22 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:10.680 15:46:22 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:10.680 15:46:22 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:10.680 15:46:22 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:10.680 15:46:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:10.680 15:46:22 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:10.680 15:46:22 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.680 15:46:22 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.680 15:46:22 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:10.680 15:46:22 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:10.680 15:46:22 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.680 15:46:22 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:10.680 15:46:22 -- setup/hugepages.sh@83 -- # : 0 00:04:10.680 15:46:22 -- setup/hugepages.sh@84 -- # : 0 00:04:10.680 15:46:22 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.680 15:46:22 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:10.680 15:46:22 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:10.680 15:46:22 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:10.680 15:46:22 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:10.680 15:46:22 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:10.680 15:46:22 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:10.680 15:46:22 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:10.680 15:46:22 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:10.680 15:46:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:10.680 15:46:22 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:10.680 15:46:22 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.680 15:46:22 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.680 15:46:22 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:10.680 15:46:22 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:10.680 15:46:22 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:10.680 15:46:22 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:10.680 15:46:22 -- setup/hugepages.sh@78 -- # return 0 00:04:10.680 15:46:22 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:10.680 15:46:22 -- setup/hugepages.sh@187 -- # setup output 00:04:10.680 15:46:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.680 15:46:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:10.984 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:11.250 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:11.250 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:11.250 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:11.250 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:11.250 15:46:22 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:04:11.250 15:46:22 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:11.250 15:46:22 -- setup/hugepages.sh@89 -- # local node
00:04:11.250 15:46:22 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:11.250 15:46:22 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:11.250 15:46:22 -- setup/hugepages.sh@92 -- # local surp
00:04:11.250 15:46:22 -- setup/hugepages.sh@93 -- # local resv
00:04:11.250 15:46:22 -- setup/hugepages.sh@94 -- # local anon
00:04:11.250 15:46:22 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:11.250 15:46:22 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:11.250 15:46:22 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:11.250 15:46:22 -- setup/common.sh@18 -- # local node=
00:04:11.250 15:46:22 -- setup/common.sh@19 -- # local var val
00:04:11.250 15:46:22 -- setup/common.sh@20 -- # local mem_f mem
00:04:11.250 15:46:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.250 15:46:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.250 15:46:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.250 15:46:22 -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.250 15:46:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.250 15:46:22 -- setup/common.sh@31 -- # IFS=': '
00:04:11.250 15:46:22 -- setup/common.sh@31 -- # read -r var val _
00:04:11.250 15:46:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8999612 kB' 'MemAvailable: 10555284 kB' 'Buffers: 2684 kB' 'Cached: 1768948 kB' 'SwapCached: 0 kB' 'Active: 467488 kB' 'Inactive: 1421916 kB' 'Active(anon): 128264 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 119308 kB' 'Mapped: 50704 kB' 'Shmem: 10492 kB' 'KReclaimable: 63368 kB' 'Slab: 161420 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98052 kB' 'KernelStack: 6452 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 325404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
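For custom_alloc the prologue above requested a 1048576 kB pool via HUGENODE='nodes_hp[0]=512', and the snapshot confirms the math: 512 pages x 2048 kB = 1048576 kB, exactly the Hugetlb figure reported. The sizing step as a sketch (assuming, as the trace suggests, the size argument is in kB):

#!/usr/bin/env bash
# Sketch of the pool-sizing arithmetic: pages = requested kB / hugepage kB.
size_kb=1048576                                   # requested pool size (kB)
hugepagesize_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)
nr_hugepages=$(( size_kb / hugepagesize_kb ))     # 1048576 / 2048 = 512
echo "HUGENODE='nodes_hp[0]=$nr_hugepages'"       # e.g. nodes_hp[0]=512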
[xtrace condensed: compare/continue scan of the snapshot above for AnonHugePages]
00:04:11.251
15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.251 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.251 15:46:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.251 15:46:22 -- setup/common.sh@33 -- # echo 0 00:04:11.251 15:46:22 -- setup/common.sh@33 -- # return 0 00:04:11.251 15:46:22 -- setup/hugepages.sh@97 -- # anon=0 00:04:11.251 15:46:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:11.251 15:46:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.251 15:46:22 -- setup/common.sh@18 -- # local node= 00:04:11.251 15:46:22 -- setup/common.sh@19 -- # local var val 00:04:11.251 15:46:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.251 15:46:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.251 15:46:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.251 15:46:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.251 15:46:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.251 15:46:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.251 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.251 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.251 15:46:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8999612 kB' 'MemAvailable: 10555284 kB' 'Buffers: 2684 kB' 'Cached: 1768948 kB' 'SwapCached: 0 kB' 'Active: 467360 kB' 'Inactive: 1421916 kB' 'Active(anon): 128136 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 119224 kB' 'Mapped: 50736 kB' 'Shmem: 10492 kB' 'KReclaimable: 63368 kB' 'Slab: 161472 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98104 kB' 'KernelStack: 6480 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 325404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB' 00:04:11.251 15:46:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.251 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.251 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.251 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.251 15:46:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.251 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.251 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.251 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.251 15:46:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.251 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.251 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.251 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.251 15:46:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.251 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.251 15:46:22 
-- setup/common.sh@31 -- # IFS=': ' 00:04:11.251 15:46:22 -- setup/common.sh@31 -- # read -r var val _ [... identical compare/continue trace repeated for each /proc/meminfo key, Cached through HugePages_Rsvd ...] 00:04:11.252 15:46:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.252 15:46:22 -- setup/common.sh@33 -- # echo 0 00:04:11.253 15:46:22 -- setup/common.sh@33 -- # return 0 00:04:11.253 15:46:22 -- setup/hugepages.sh@99 -- # surp=0 00:04:11.253 15:46:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:11.253 15:46:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:11.253 15:46:22 -- setup/common.sh@18 -- # local node= 00:04:11.253 15:46:22 -- setup/common.sh@19 -- # local var val 00:04:11.253 15:46:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.253 15:46:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.253 15:46:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.253 15:46:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.253 15:46:22 --
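
The compare/continue churn in this stretch is just a linear key lookup over /proc/meminfo: split each line on ': ', test the key against the one requested, and continue until it matches, then echo the value. A minimal standalone sketch of that pattern, assuming plain bash (hypothetical function name; the in-tree helper is get_meminfo in setup/common.sh and additionally handles per-node sysfs files):

    # Sketch only: scan /proc/meminfo and print the value for one key.
    # The real helper mapfiles the whole file first; reading it directly
    # line by line gives the same result for the system-wide case.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # e.g. var=HugePages_Surp val=0, or var=MemTotal val=12237096 (_ swallows "kB")
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    surp=$(get_meminfo_value HugePages_Surp)   # -> 0, matching the trace above
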
setup/common.sh@28 -- # mapfile -t mem 00:04:11.253 15:46:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.253 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.253 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.253 15:46:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8999612 kB' 'MemAvailable: 10555284 kB' 'Buffers: 2684 kB' 'Cached: 1768948 kB' 'SwapCached: 0 kB' 'Active: 467500 kB' 'Inactive: 1421916 kB' 'Active(anon): 128276 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 119356 kB' 'Mapped: 50736 kB' 'Shmem: 10492 kB' 'KReclaimable: 63368 kB' 'Slab: 161472 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98104 kB' 'KernelStack: 6480 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 325404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB' 00:04:11.253 15:46:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.253 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.253 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.253 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.253 15:46:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.253 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.253 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.253 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.253 15:46:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.253 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.253 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.253 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.253 15:46:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.253 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.253 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.253 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.253 15:46:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.253 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.253 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.253 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.253 15:46:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.253 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.253 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.253 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.253 15:46:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.253 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.253 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.253 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.253 15:46:22 -- 
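
Before each scan the helper also picks its input file: with no node argument the probe path degenerates to /sys/devices/system/node/node/meminfo, which never exists, so it falls back to /proc/meminfo; with an explicit node id (as in the per-node check near the end of this test) it reads the sysfs copy and strips the leading "Node N " prefix, which is what the "${mem[@]#Node +([0-9]) }" expansion in the trace does. A hedged sketch of that selection step, assuming bash with extglob enabled:

    # Sketch of the source selection traced above (simplified from
    # setup/common.sh): per-node sysfs meminfo when a node id is given,
    # otherwise the system-wide /proc/meminfo.
    shopt -s extglob
    node=${1:-}                          # empty -> system-wide, "0" -> node0
    mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")     # strip "Node 0 "; no-op for /proc/meminfo
    printf '%s\n' "${mem[@]}"
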
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.253 15:46:22 -- setup/common.sh@32 -- # continue [... identical compare/continue trace repeated for each /proc/meminfo key, Active(anon) through FilePmdMapped ...] 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # read -r var
val _ 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.254 15:46:22 -- setup/common.sh@33 -- # echo 0 00:04:11.254 15:46:22 -- setup/common.sh@33 -- # return 0 00:04:11.254 15:46:22 -- setup/hugepages.sh@100 -- # resv=0 00:04:11.254 nr_hugepages=512 00:04:11.254 15:46:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:11.254 resv_hugepages=0 00:04:11.254 15:46:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:11.254 surplus_hugepages=0 00:04:11.254 15:46:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:11.254 anon_hugepages=0 00:04:11.254 15:46:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:11.254 15:46:22 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:11.254 15:46:22 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:11.254 15:46:22 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:11.254 15:46:22 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:11.254 15:46:22 -- setup/common.sh@18 -- # local node= 00:04:11.254 15:46:22 -- setup/common.sh@19 -- # local var val 00:04:11.254 15:46:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.254 15:46:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.254 15:46:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.254 15:46:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.254 15:46:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.254 15:46:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.254 15:46:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8999612 kB' 'MemAvailable: 10555284 kB' 'Buffers: 2684 kB' 'Cached: 1768948 kB' 'SwapCached: 0 kB' 'Active: 467404 kB' 'Inactive: 1421916 kB' 'Active(anon): 128180 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 119260 kB' 'Mapped: 50736 kB' 'Shmem: 10492 kB' 'KReclaimable: 63368 kB' 'Slab: 161468 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98100 kB' 'KernelStack: 6448 kB' 'PageTables: 3924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 325404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB' 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.254 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.254 15:46:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.255 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.255 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.255 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.255 15:46:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.255 
15:46:22 -- setup/common.sh@32 -- # continue [... identical compare/continue trace repeated for each /proc/meminfo key, Active(file) through CmaFree ...] 00:04:11.256 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.256
15:46:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.256 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.256 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.256 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.256 15:46:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.256 15:46:22 -- setup/common.sh@33 -- # echo 512 00:04:11.256 15:46:22 -- setup/common.sh@33 -- # return 0 00:04:11.256 15:46:22 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:11.256 15:46:22 -- setup/hugepages.sh@112 -- # get_nodes 00:04:11.256 15:46:22 -- setup/hugepages.sh@27 -- # local node 00:04:11.256 15:46:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.256 15:46:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:11.256 15:46:22 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:11.256 15:46:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:11.256 15:46:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.256 15:46:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:11.256 15:46:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:11.256 15:46:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.256 15:46:22 -- setup/common.sh@18 -- # local node=0 00:04:11.256 15:46:22 -- setup/common.sh@19 -- # local var val 00:04:11.256 15:46:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.256 15:46:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.256 15:46:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:11.256 15:46:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:11.256 15:46:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.256 15:46:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.256 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.256 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.256 15:46:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8999612 kB' 'MemUsed: 3237484 kB' 'SwapCached: 0 kB' 'Active: 467216 kB' 'Inactive: 1421916 kB' 'Active(anon): 127992 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'FilePages: 1771632 kB' 'Mapped: 50736 kB' 'AnonPages: 119128 kB' 'Shmem: 10492 kB' 'KernelStack: 6464 kB' 'PageTables: 3976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63368 kB' 'Slab: 161468 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98100 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:11.256 15:46:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.256 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.256 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.256 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.256 15:46:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.256 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.256 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.256 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.256 15:46:22 -- setup/common.sh@32 -- # [[ 
MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.256 15:46:22 -- setup/common.sh@32 -- # continue [... identical compare/continue trace repeated for each node0 meminfo key, SwapCached through SReclaimable ...] 00:04:11.256 15:46:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.256 15:46:22 -- setup/common.sh@32 -- #
continue 00:04:11.256 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.256 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.256 15:46:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.256 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.256 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.256 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.256 15:46:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.256 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.256 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.256 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.257 15:46:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.257 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.257 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.257 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.257 15:46:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.257 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.257 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.257 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.257 15:46:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.257 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.257 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.257 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.257 15:46:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.257 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.257 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.257 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.257 15:46:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.257 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.257 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.257 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.257 15:46:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.257 15:46:22 -- setup/common.sh@32 -- # continue 00:04:11.257 15:46:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.257 15:46:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.257 15:46:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.257 15:46:22 -- setup/common.sh@33 -- # echo 0 00:04:11.257 15:46:22 -- setup/common.sh@33 -- # return 0 00:04:11.257 15:46:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:11.257 15:46:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:11.257 node0=512 expecting 512 00:04:11.257 15:46:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:11.257 15:46:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:11.257 15:46:22 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:11.257 15:46:22 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:11.257 00:04:11.257 real 0m0.570s 00:04:11.257 user 0m0.220s 00:04:11.257 sys 0m0.364s 00:04:11.257 15:46:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:11.257 ************************************ 00:04:11.257 END TEST custom_alloc 00:04:11.257 ************************************ 00:04:11.257 15:46:22 -- common/autotest_common.sh@10 -- # set +x 00:04:11.257 15:46:22 -- 
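The cycle traced above is setup/common.sh's get_meminfo helper scanning a meminfo file key by key: split each line on IFS=': ', `continue` past every key that is not the one requested, then echo the value and return on the first match. A minimal sketch of the idiom, assuming plain /proc/meminfo input (a hypothetical reduction, not the actual setup/common.sh source; the real helper slurps the file with mapfile and also handles per-node files, as later traces show):

    # Sketch of the get_meminfo idiom seen in this trace -- not the
    # actual setup/common.sh source.
    get_meminfo() {
        local get=$1 var val _
        # IFS=': ' splits "HugePages_Surp:    0" into var=HugePages_Surp, val=0
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching keys hit `continue`
            echo "$val"                        # first match: print the value...
            return 0                           # ...and stop scanning
        done < /proc/meminfo
        return 1
    }

    get_meminfo HugePages_Surp   # prints 0 on the host captured here

The linear scan is why the trace repeats the same four steps once per key until the requested counter comes up.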
00:04:11.257 15:46:22 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:11.257 15:46:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:11.257 15:46:22 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:11.257 ************************************
00:04:11.257 START TEST no_shrink_alloc
00:04:11.257 ************************************
00:04:11.257 15:46:22 -- common/autotest_common.sh@10 -- # set +x
00:04:11.257 15:46:22 -- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:04:11.257 15:46:22 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:11.257 15:46:22 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:11.257 15:46:22 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:11.257 15:46:22 -- setup/hugepages.sh@51 -- # shift
00:04:11.257 15:46:22 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:11.257 15:46:22 -- setup/hugepages.sh@52 -- # local node_ids
00:04:11.257 15:46:22 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:11.257 15:46:22 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:11.257 15:46:22 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:11.257 15:46:22 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:11.257 15:46:22 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:11.257 15:46:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:11.257 15:46:22 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:11.257 15:46:22 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:11.257 15:46:22 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:11.257 15:46:22 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:11.257 15:46:22 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:11.257 15:46:22 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:11.257 15:46:22 -- setup/hugepages.sh@73 -- # return 0
00:04:11.257 15:46:22 -- setup/hugepages.sh@198 -- # setup output
00:04:11.257 15:46:22 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:11.257 15:46:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:11.832 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:11.832 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:11.832 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:11.832 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:11.832 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:11.832 15:46:23 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:11.832 15:46:23 -- setup/hugepages.sh@89 -- # local node
00:04:11.832 15:46:23 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:11.832 15:46:23 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:11.832 15:46:23 -- setup/hugepages.sh@92 -- # local surp
00:04:11.832 15:46:23 -- setup/hugepages.sh@93 -- # local resv
00:04:11.832 15:46:23 -- setup/hugepages.sh@94 -- # local anon
00:04:11.832 15:46:23 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:11.832 15:46:23 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:11.832 15:46:23 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:11.832 15:46:23 -- setup/common.sh@18 -- # local node=
00:04:11.832 15:46:23 -- setup/common.sh@19 -- # local var val
00:04:11.832 15:46:23 -- setup/common.sh@20 -- # local mem_f mem
00:04:11.832 15:46:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
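get_test_nr_hugepages converts the requested pool size into a page count: the 2097152 kB asked for above, divided by the default 2048 kB (2 MiB) hugepage size, yields the nr_hugepages=1024 seen at hugepages.sh@57. The same arithmetic as a standalone check (variable names here are illustrative, not the script's):

    # 2097152 kB requested / 2048 kB per hugepage = 1024 pages
    size_kb=2097152
    hugepage_kb=2048
    echo $(( size_kb / hugepage_kb ))   # -> 1024

This also matches the snapshots below: Hugepagesize: 2048 kB and Hugetlb: 2097152 kB for a 1024-page pool.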
00:04:11.833 15:46:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.833 15:46:23 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.833 15:46:23 -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.833 15:46:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.833 15:46:23 -- setup/common.sh@31 -- # IFS=': '
00:04:11.833 15:46:23 -- setup/common.sh@31 -- # read -r var val _
00:04:11.833 15:46:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7952552 kB' 'MemAvailable: 9508224 kB' 'Buffers: 2684 kB' 'Cached: 1768948 kB' 'SwapCached: 0 kB' 'Active: 467900 kB' 'Inactive: 1421916 kB' 'Active(anon): 128676 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119752 kB' 'Mapped: 50756 kB' 'Shmem: 10492 kB' 'KReclaimable: 63368 kB' 'Slab: 161428 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98060 kB' 'KernelStack: 6492 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 325604 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
00:04:11.833 15:46:23 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.833 15:46:23 -- setup/common.sh@32 -- # continue
00:04:11.833 15:46:23 -- setup/common.sh@31 -- # IFS=': '
00:04:11.833 15:46:23 -- setup/common.sh@31 -- # read -r var val _
[... the compare/continue/IFS/read cycle repeats for every key ahead of the match, MemFree through HardwareCorrupted, in the snapshot order printed above ...]
00:04:11.834 15:46:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.834 15:46:23 -- setup/common.sh@33 -- # echo 0
00:04:11.834 15:46:23 -- setup/common.sh@33 -- # return 0
00:04:11.834 15:46:23 -- setup/hugepages.sh@97 -- # anon=0
00:04:11.834 15:46:23 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:11.834 15:46:23 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:11.834 15:46:23 -- setup/common.sh@18 -- # local node=
00:04:11.834 15:46:23 -- setup/common.sh@19 -- # local var val
00:04:11.834 15:46:23 -- setup/common.sh@20 -- # local mem_f mem
00:04:11.834 15:46:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.834 15:46:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.834 15:46:23 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.834 15:46:23 -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.834 15:46:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.834 15:46:23 -- setup/common.sh@31 -- # IFS=': '
00:04:11.834 15:46:23 -- setup/common.sh@31 -- # read -r var val _
00:04:11.834 15:46:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7952552 kB' 'MemAvailable: 9508224 kB' 'Buffers: 2684 kB' 'Cached: 1768948 kB' 'SwapCached: 0 kB' 'Active: 467560 kB' 'Inactive: 1421916 kB' 'Active(anon): 128336 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119640 kB' 'Mapped: 50824 kB' 'Shmem: 10492 kB' 'KReclaimable: 63368 kB' 'Slab: 161416 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98048 kB' 'KernelStack: 6444 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 325604 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
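A note on the escaped patterns such as \A\n\o\n\H\u\g\e\P\a\g\e\s: inside [[ ]] the right-hand side of == is a glob pattern, and bash's xtrace re-prints it with each character backslash-escaped when the comparison is literal rather than a glob. The escaped and plain spellings denote the same string; nothing in the log is corrupted. A small demo of how such trace lines arise (exact escaping may vary by bash version):

    set -x
    var=AnonHugePages
    # traces roughly as: [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
    [[ $var == "$var" ]]
    set +x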
00:04:11.834 15:46:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.834 15:46:23 -- setup/common.sh@32 -- # continue
00:04:11.834 15:46:23 -- setup/common.sh@31 -- # IFS=': '
00:04:11.834 15:46:23 -- setup/common.sh@31 -- # read -r var val _
[... the compare/continue/IFS/read cycle repeats for every key ahead of the match, MemFree through HugePages_Rsvd, in the snapshot order printed above ...]
00:04:11.836 15:46:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.836 15:46:23 -- setup/common.sh@33 -- # echo 0
00:04:11.836 15:46:23 -- setup/common.sh@33 -- # return 0
00:04:11.836 15:46:23 -- setup/hugepages.sh@99 -- # surp=0
00:04:11.836 15:46:23 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:11.836 15:46:23 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:11.836 15:46:23 -- setup/common.sh@18 -- # local node=
00:04:11.836 15:46:23 -- setup/common.sh@19 -- # local var val
00:04:11.836 15:46:23 -- setup/common.sh@20 -- # local mem_f mem
00:04:11.836 15:46:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.836 15:46:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.836 15:46:23 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.836 15:46:23 -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.836 15:46:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.836 15:46:23 -- setup/common.sh@31 -- # IFS=': '
00:04:11.836 15:46:23 -- setup/common.sh@31 -- # read -r var val _
00:04:11.836 15:46:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7952552 kB' 'MemAvailable: 9508224 kB' 'Buffers: 2684 kB' 'Cached: 1768948 kB' 'SwapCached: 0 kB' 'Active: 466964 kB' 'Inactive: 1421916 kB' 'Active(anon): 127740 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119096 kB' 'Mapped: 50736 kB' 'Shmem: 10492 kB' 'KReclaimable: 63368 kB' 'Slab: 161460 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98092 kB' 'KernelStack: 6448 kB' 'PageTables: 3928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 325604 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
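For reference, the four pool counters this verifier keeps reading have fixed kernel meanings: HugePages_Total is the configured pool size, HugePages_Free the pages not yet handed out, HugePages_Rsvd the pages reserved by mappings but not yet faulted in, and HugePages_Surp the surplus allocated beyond nr_hugepages under overcommit. They can be pulled in one go (illustrative command, not part of the test):

    # Just the hugepage pool counters; on this host they read
    # Total 1024, Free 1024, Rsvd 0, Surp 0.
    grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo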
00:04:11.836 15:46:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.836 15:46:23 -- setup/common.sh@32 -- # continue
00:04:11.836 15:46:23 -- setup/common.sh@31 -- # IFS=': '
00:04:11.836 15:46:23 -- setup/common.sh@31 -- # read -r var val _
[... the compare/continue/IFS/read cycle repeats for every key ahead of the match, MemFree through HugePages_Free, in the snapshot order printed above ...]
00:04:11.838 15:46:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.838 15:46:23 -- setup/common.sh@33 -- # echo 0
00:04:11.838 15:46:23 -- setup/common.sh@33 -- # return 0
00:04:11.838 15:46:23 -- setup/hugepages.sh@100 -- # resv=0
00:04:11.838 15:46:23 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:11.838 nr_hugepages=1024
00:04:11.838 15:46:23 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:11.838 resv_hugepages=0
00:04:11.838 15:46:23 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:11.838 surplus_hugepages=0
00:04:11.838 15:46:23 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:11.838 anon_hugepages=0
00:04:11.838 15:46:23 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:11.838 15:46:23 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:11.838 15:46:23 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:11.838 15:46:23 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:11.838 15:46:23 -- setup/common.sh@18 -- # local node=
00:04:11.838 15:46:23 -- setup/common.sh@19 -- # local var val
00:04:11.838 15:46:23 -- setup/common.sh@20 -- # local mem_f mem
00:04:11.838 15:46:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.838 15:46:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.838 15:46:23 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.838 15:46:23 -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.838 15:46:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.838 15:46:23 -- setup/common.sh@31 -- # IFS=': '
00:04:11.838 15:46:23 -- setup/common.sh@31 -- # read -r var val _
00:04:11.838 15:46:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7952552 kB' 'MemAvailable: 9508224 kB' 'Buffers: 2684 kB' 'Cached: 1768948 kB' 'SwapCached: 0 kB' 'Active: 467040 kB' 'Inactive: 1421916 kB' 'Active(anon): 127816 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118900 kB' 'Mapped: 50736 kB' 'Shmem: 10492 kB' 'KReclaimable: 63368 kB' 'Slab: 161448 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98080 kB' 'KernelStack: 6480 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 325604 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
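The @18/@23/@29 lines show how get_meminfo picks its input: called with a node number it would read /sys/devices/system/node/node<N>/meminfo, whose lines carry a "Node <N> " prefix that the @29 expansion strips; with node= empty, the @23 existence test fails (the literal path node/node/meminfo) and the helper falls back to /proc/meminfo. A sketch of that prefix strip, assuming a NUMA node0 is present:

    shopt -s extglob                      # +([0-9]) is an extglob pattern
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    # "Node 0 MemTotal:  12237096 kB" -> "MemTotal:  12237096 kB"
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]:0:3}"         # show the first few cleaned lines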
00:04:11.838 15:46:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.838 15:46:23 -- setup/common.sh@32 -- # continue
00:04:11.838 15:46:23 -- setup/common.sh@31 -- # IFS=': '
00:04:11.838 15:46:23 -- setup/common.sh@31 -- # read -r var val _
[... the compare/continue/IFS/read cycle repeats for MemFree through Percpu, in the snapshot order printed above ...]
00:04:11.839 15:46:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.839 15:46:23 -- setup/common.sh@32 -- # continue
00:04:11.840 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.840 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.840 15:46:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.840 15:46:23 -- setup/common.sh@32 -- # continue 00:04:11.840 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.840 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.840 15:46:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.840 15:46:23 -- setup/common.sh@32 -- # continue 00:04:11.840 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.840 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.840 15:46:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.840 15:46:23 -- setup/common.sh@32 -- # continue 00:04:11.840 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.840 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.840 15:46:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.840 15:46:23 -- setup/common.sh@32 -- # continue 00:04:11.840 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.840 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.840 15:46:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.840 15:46:23 -- setup/common.sh@32 -- # continue 00:04:11.840 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.840 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.840 15:46:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.840 15:46:23 -- setup/common.sh@32 -- # continue 00:04:11.840 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.840 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.840 15:46:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.840 15:46:23 -- setup/common.sh@32 -- # continue 00:04:11.840 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.840 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.840 15:46:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.840 15:46:23 -- setup/common.sh@32 -- # continue 00:04:11.840 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.840 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.840 15:46:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.840 15:46:23 -- setup/common.sh@33 -- # echo 1024 00:04:11.840 15:46:23 -- setup/common.sh@33 -- # return 0 00:04:11.840 15:46:23 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:11.840 15:46:23 -- setup/hugepages.sh@112 -- # get_nodes 00:04:11.840 15:46:23 -- setup/hugepages.sh@27 -- # local node 00:04:11.840 15:46:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.840 15:46:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:11.840 15:46:23 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:11.840 15:46:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:11.840 15:46:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.840 15:46:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:11.840 15:46:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:11.840 15:46:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.840 15:46:23 -- setup/common.sh@18 -- # local node=0 
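The @17-@33 expansion traced above is the get_meminfo helper: snapshot the chosen meminfo file, strip any per-node prefix, then scan field by field until the requested key matches. A minimal standalone sketch of that pattern, assumed from the trace rather than copied from the SPDK source:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern shown in the trace above
    # (assumed from the trace, not the verbatim SPDK function).
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo mem
        # Per-node counters live in sysfs when a node index is given.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # sysfs lines carry a "Node N " prefix that /proc lines lack.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done
    }

    get_meminfo HugePages_Total     # prints 1024 in the run above
    get_meminfo HugePages_Surp 0    # node0 surplus: prints 0

The linear scan is why the trace repeats @31-32 once per meminfo field: the function reads the whole snapshot and returns at the first matching key.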
00:04:11.840 15:46:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:11.840 15:46:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:11.840 15:46:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:11.840 15:46:23 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:11.840 15:46:23 -- setup/common.sh@18 -- # local node=0
00:04:11.840 15:46:23 -- setup/common.sh@19 -- # local var val
00:04:11.840 15:46:23 -- setup/common.sh@20 -- # local mem_f mem
00:04:11.840 15:46:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.840 15:46:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:11.840 15:46:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:11.840 15:46:23 -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.840 15:46:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.840 15:46:23 -- setup/common.sh@31 -- # IFS=': '
00:04:11.840 15:46:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7952552 kB' 'MemUsed: 4284544 kB' 'SwapCached: 0 kB' 'Active: 467004 kB' 'Inactive: 1421916 kB' 'Active(anon): 127780 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1771632 kB' 'Mapped: 50736 kB' 'AnonPages: 119124 kB' 'Shmem: 10492 kB' 'KernelStack: 6464 kB' 'PageTables: 3976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63368 kB' 'Slab: 161448 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 98080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:11.840 15:46:23 -- setup/common.sh@31 -- # read -r var val _
00:04:11.840 15:46:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.840 15:46:23 -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32 repeat for every following node0 meminfo field until the HugePages_Surp entry]
00:04:11.841 15:46:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.841 15:46:23 -- setup/common.sh@33 -- # echo 0
00:04:11.841 15:46:23 -- setup/common.sh@33 -- # return 0
00:04:11.841 15:46:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:11.841 node0=1024 expecting 1024
00:04:11.841 15:46:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:11.841 15:46:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:11.841 15:46:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:11.841 15:46:23 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:11.841 15:46:23 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:11.841 15:46:23 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:11.841 15:46:23 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:11.841 15:46:23 -- setup/hugepages.sh@202 -- # setup output
00:04:11.841 15:46:23 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:11.841 15:46:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:12.414 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:12.414 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:12.414 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:12.414 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:12.414 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:12.414 INFO: Requested 512 hugepages but 1024 already allocated on node0
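setup.sh is invoked here with NRHUGE=512 and CLEAR_HUGE=no, and the INFO line shows it leaves the existing 1024-page pool in place rather than shrinking it. A hedged sketch of that guard; the behavior is inferred from the INFO line, not copied from scripts/setup.sh:

    # Sketch of the allocation guard implied by the INFO line above
    # (assumed behavior, not the verbatim setup.sh logic). Run as root.
    NRHUGE=${NRHUGE:-512}
    node=0
    nr_file=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    current=$(<"$nr_file")
    if (( current >= NRHUGE )); then
        # With CLEAR_HUGE=no, keep the larger existing pool.
        echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node$node"
    else
        echo "$NRHUGE" > "$nr_file"   # grow the pool to the requested size
    fi

Keeping the larger pool makes repeated test runs idempotent: the subsequent verify_nr_hugepages pass still expects the original 1024 pages.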
00:04:12.414 15:46:23 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:12.414 15:46:23 -- setup/hugepages.sh@89 -- # local node
00:04:12.414 15:46:23 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:12.414 15:46:23 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:12.414 15:46:23 -- setup/hugepages.sh@92 -- # local surp
00:04:12.414 15:46:23 -- setup/hugepages.sh@93 -- # local resv
00:04:12.414 15:46:23 -- setup/hugepages.sh@94 -- # local anon
00:04:12.414 15:46:23 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:12.414 15:46:23 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:12.414 15:46:23 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:12.414 15:46:23 -- setup/common.sh@18 -- # local node=
00:04:12.414 15:46:23 -- setup/common.sh@19 -- # local var val
00:04:12.414 15:46:23 -- setup/common.sh@20 -- # local mem_f mem
00:04:12.414 15:46:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.414 15:46:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.414 15:46:23 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.414 15:46:23 -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.414 15:46:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.414 15:46:23 -- setup/common.sh@31 -- # IFS=': '
00:04:12.414 15:46:23 -- setup/common.sh@31 -- # read -r var val _
00:04:12.414 15:46:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7943988 kB' 'MemAvailable: 9499656 kB' 'Buffers: 2684 kB' 'Cached: 1768948 kB' 'SwapCached: 0 kB' 'Active: 466004 kB' 'Inactive: 1421916 kB' 'Active(anon): 126780 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117836 kB' 'Mapped: 49996 kB' 'Shmem: 10492 kB' 'KReclaimable: 63356 kB' 'Slab: 161296 kB' 'SReclaimable: 63356 kB' 'SUnreclaim: 97940 kB' 'KernelStack: 6452 kB' 'PageTables: 3872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 312484 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
00:04:12.414 15:46:23 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:12.414 15:46:23 -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32 repeat for every following meminfo field until the AnonHugePages entry]
00:04:12.415 15:46:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:12.415 15:46:23 -- setup/common.sh@33 -- # echo 0
00:04:12.415 15:46:23 -- setup/common.sh@33 -- # return 0
00:04:12.415 15:46:23 -- setup/hugepages.sh@97 -- # anon=0
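The @96 test above matches the kernel's transparent_hugepage mode string against *[never]*: anonymous THP is only counted toward the hugepage total when THP is not globally disabled. A small sketch of that decision, reusing the get_meminfo sketch from earlier (assumed from the trace):

    # Sketch of the anon-THP accounting step (@96-@97 in the trace above).
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *'[never]'* ]]; then
        anon=$(get_meminfo AnonHugePages)   # kB of THP-backed anonymous memory
    else
        anon=0
    fi

Here the mode is "always [madvise] never" with [madvise] selected, so the branch is taken and AnonHugePages (0 kB) is read from /proc/meminfo.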
00:04:12.415 15:46:23 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:12.415 15:46:23 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:12.415 15:46:23 -- setup/common.sh@18 -- # local node=
00:04:12.415 15:46:23 -- setup/common.sh@19 -- # local var val
00:04:12.415 15:46:23 -- setup/common.sh@20 -- # local mem_f mem
00:04:12.415 15:46:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.415 15:46:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.415 15:46:23 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.415 15:46:23 -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.415 15:46:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.415 15:46:23 -- setup/common.sh@31 -- # IFS=': '
00:04:12.415 15:46:23 -- setup/common.sh@31 -- # read -r var val _
00:04:12.415 15:46:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7943736 kB' 'MemAvailable: 9499404 kB' 'Buffers: 2684 kB' 'Cached: 1768948 kB' 'SwapCached: 0 kB' 'Active: 465544 kB' 'Inactive: 1421916 kB' 'Active(anon): 126320 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117624 kB' 'Mapped: 49816 kB' 'Shmem: 10492 kB' 'KReclaimable: 63356 kB' 'Slab: 161312 kB' 'SReclaimable: 63356 kB' 'SUnreclaim: 97956 kB' 'KernelStack: 6416 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 312484 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
00:04:12.416 15:46:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:12.416 15:46:23 -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32 repeat for every following meminfo field until the HugePages_Surp entry]
00:04:12.417 15:46:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:12.417 15:46:23 -- setup/common.sh@33 -- # echo 0
00:04:12.417 15:46:23 -- setup/common.sh@33 -- # return 0
00:04:12.417 15:46:23 -- setup/hugepages.sh@99 -- # surp=0
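With anon and surp gathered and resv about to be read, the verification reduces to the arithmetic seen at hugepages.sh@107/@110 earlier plus the per-node echo at @126-@128. A condensed sketch, assumed from the trace rather than the verbatim hugepages.sh function:

    # Sketch of the consistency check the trace is driving
    # (get_meminfo is the sketch shown earlier).
    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)
    total=$(get_meminfo HugePages_Total)   # 1024 in this run

    # Global check: the kernel-reported total must equal the requested
    # count plus surplus and reserved pages.
    (( total == nr_hugepages + surp + resv )) || exit 1

    # Per-node check: each NUMA node must report the expected share.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        echo "node$node=$(get_meminfo HugePages_Total "$node") expecting $nr_hugepages"
    done

On this single-node VM the loop prints the same "node0=1024 expecting 1024" line seen earlier in the log.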
00:04:12.417 15:46:23 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:12.417 15:46:23 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:12.417 15:46:23 -- setup/common.sh@18 -- # local node=
00:04:12.417 15:46:23 -- setup/common.sh@19 -- # local var val
00:04:12.417 15:46:23 -- setup/common.sh@20 -- # local mem_f mem
00:04:12.417 15:46:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.417 15:46:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.417 15:46:23 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.417 15:46:23 -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.417 15:46:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.417 15:46:23 -- setup/common.sh@31 -- # IFS=': '
00:04:12.417 15:46:23 -- setup/common.sh@31 -- # read -r var val _
00:04:12.417 15:46:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7943736 kB' 'MemAvailable: 9499404 kB' 'Buffers: 2684 kB' 'Cached: 1768948 kB' 'SwapCached: 0 kB' 'Active: 465476 kB' 'Inactive: 1421916 kB' 'Active(anon): 126252 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117296 kB' 'Mapped: 49872 kB' 'Shmem: 10492 kB' 'KReclaimable: 63356 kB' 'Slab: 161308 kB' 'SReclaimable: 63356 kB' 'SUnreclaim: 97952 kB' 'KernelStack: 6400 kB' 'PageTables: 3648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 312484 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
00:04:12.417 15:46:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:12.417 15:46:23 -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32 continue field by field for the HugePages_Rsvd lookup]
00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.418 15:46:23 -- setup/common.sh@33 -- # echo 0 00:04:12.418 15:46:23 -- setup/common.sh@33 -- # return 0 00:04:12.418 nr_hugepages=1024 00:04:12.418 resv_hugepages=0 00:04:12.418 15:46:23 -- setup/hugepages.sh@100 -- # resv=0 00:04:12.418 15:46:23 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:12.418 15:46:23 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:12.418 surplus_hugepages=0 00:04:12.418 15:46:23 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:12.418 anon_hugepages=0 00:04:12.418 15:46:23 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:12.418 15:46:23 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:12.418 15:46:23 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:12.418 15:46:23 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:12.418 15:46:23 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:12.418 15:46:23 -- setup/common.sh@18 -- # local node= 00:04:12.418 15:46:23 -- setup/common.sh@19 -- # local var val 00:04:12.418 15:46:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.418 15:46:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.418 15:46:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.418 15:46:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.418 15:46:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.418 15:46:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7951276 kB' 'MemAvailable: 9506944 kB' 'Buffers: 2684 kB' 'Cached: 1768948 kB' 'SwapCached: 0 kB' 'Active: 465264 kB' 'Inactive: 1421916 kB' 'Active(anon): 126040 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117084 kB' 'Mapped: 49872 kB' 'Shmem: 10492 kB' 'KReclaimable: 63356 kB' 'Slab: 161244 kB' 'SReclaimable: 63356 kB' 'SUnreclaim: 97888 kB' 'KernelStack: 6384 kB' 'PageTables: 3600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 312484 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB' 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue 00:04:12.418 15:46:23 -- 
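The two arithmetic guards just traced are the point of the whole scan: get_meminfo pulled HugePages_Total, HugePages_Rsvd and HugePages_Surp out of /proc/meminfo, and the test only passes when the configured pool adds up to the 1024 pages requested. A worked version of that check with this run's values (a sketch for illustration, not the upstream assertion verbatim):

    nr_hugepages=1024; surp=0; resv=0           # values echoed in the trace above
    (( 1024 == nr_hugepages + surp + resv ))    # 1024 == 1024 + 0 + 0
    echo $?                                     # prints 0: the configured pool matches the request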
00:04:12.418 15:46:23 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:12.418 15:46:23 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:12.418 15:46:23 -- setup/common.sh@18 -- # local node=
00:04:12.418 15:46:23 -- setup/common.sh@19 -- # local var val
00:04:12.418 15:46:23 -- setup/common.sh@20 -- # local mem_f mem
00:04:12.418 15:46:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.418 15:46:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.418 15:46:23 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.418 15:46:23 -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.418 15:46:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.418 15:46:23 -- setup/common.sh@31 -- # IFS=': '
00:04:12.418 15:46:23 -- setup/common.sh@31 -- # read -r var val _
00:04:12.418 15:46:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7951276 kB' 'MemAvailable: 9506944 kB' 'Buffers: 2684 kB' 'Cached: 1768948 kB' 'SwapCached: 0 kB' 'Active: 465264 kB' 'Inactive: 1421916 kB' 'Active(anon): 126040 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117084 kB' 'Mapped: 49872 kB' 'Shmem: 10492 kB' 'KReclaimable: 63356 kB' 'Slab: 161244 kB' 'SReclaimable: 63356 kB' 'SUnreclaim: 97888 kB' 'KernelStack: 6384 kB' 'PageTables: 3600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 312484 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
00:04:12.418 15:46:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:12.418 15:46:23 -- setup/common.sh@32 -- # continue
[log condensed: the same scan cycle repeats for every key from MemFree through Unaccepted before the match below]
00:04:12.420 15:46:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:12.420 15:46:23 -- setup/common.sh@33 -- # echo 1024
00:04:12.420 15:46:23 -- setup/common.sh@33 -- # return 0
00:04:12.420 15:46:23 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:12.420 15:46:23 -- setup/hugepages.sh@112 -- # get_nodes
00:04:12.420 15:46:23 -- setup/hugepages.sh@27 -- # local node
00:04:12.420 15:46:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:12.420 15:46:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:12.420 15:46:23 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:12.420 15:46:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:12.420 15:46:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:12.420 15:46:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
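Three get_meminfo calls drive this test: HugePages_Rsvd and HugePages_Total above, and the per-node HugePages_Surp query below. A minimal sketch of the loop behind that xtrace, with names taken from the trace itself (the real setup/common.sh may differ in detail):

    # Reconstructed from the xtrace; not the verbatim upstream function.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        # A node argument switches to that node's meminfo, whose rows carry a "Node N " prefix.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")     # strip the per-node prefix (extglob pattern)
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                  # e.g. get_meminfo HugePages_Total -> 1024
                return 0
            fi
        done
        return 1
    }

Called as "get_meminfo HugePages_Surp 0" it reads /sys/devices/system/node/node0/meminfo instead of /proc/meminfo, which is exactly the per-node query traced next.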
00:04:12.420 15:46:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:12.420 15:46:23 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:12.420 15:46:23 -- setup/common.sh@18 -- # local node=0
00:04:12.420 15:46:23 -- setup/common.sh@19 -- # local var val
00:04:12.420 15:46:23 -- setup/common.sh@20 -- # local mem_f mem
00:04:12.420 15:46:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.420 15:46:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:12.420 15:46:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:12.420 15:46:23 -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.420 15:46:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.420 15:46:23 -- setup/common.sh@31 -- # IFS=': '
00:04:12.420 15:46:23 -- setup/common.sh@31 -- # read -r var val _
00:04:12.420 15:46:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7951276 kB' 'MemUsed: 4285820 kB' 'SwapCached: 0 kB' 'Active: 465108 kB' 'Inactive: 1421916 kB' 'Active(anon): 125884 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1771632 kB' 'Mapped: 49916 kB' 'AnonPages: 117224 kB' 'Shmem: 10492 kB' 'KernelStack: 6400 kB' 'PageTables: 3648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63340 kB' 'Slab: 161196 kB' 'SReclaimable: 63340 kB' 'SUnreclaim: 97856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:12.420 15:46:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:12.420 15:46:23 -- setup/common.sh@32 -- # continue
[log condensed: the same scan cycle repeats for every node0 key from MemFree through HugePages_Free before the match below]
00:04:12.421 15:46:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:12.421 15:46:23 -- setup/common.sh@33 -- # echo 0
00:04:12.421 15:46:23 -- setup/common.sh@33 -- # return 0
00:04:12.421 15:46:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:12.421 15:46:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:12.421 15:46:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:12.421 15:46:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:12.421 15:46:23 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:12.421 node0=1024 expecting 1024
00:04:12.421 15:46:23 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:12.421
00:04:12.421 real 0m1.161s
00:04:12.421 user 0m0.511s
00:04:12.421 sys 0m0.683s
00:04:12.421 15:46:23 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:12.421 15:46:23 -- common/autotest_common.sh@10 -- # set +x
00:04:12.421 ************************************
00:04:12.421 END TEST no_shrink_alloc
00:04:12.421 ************************************
00:04:12.680 15:46:23 -- setup/hugepages.sh@217 -- # clear_hp
00:04:12.680 15:46:23 -- setup/hugepages.sh@37 -- # local node hp
00:04:12.680 15:46:23 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:12.680 15:46:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:12.680 15:46:23 -- setup/hugepages.sh@41 -- # echo 0
00:04:12.680 15:46:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:12.680 15:46:23 -- setup/hugepages.sh@41 -- # echo 0
00:04:12.680 15:46:23 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:12.680 15:46:23 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:12.680 ************************************
00:04:12.680 END TEST hugepages
00:04:12.680 ************************************
00:04:12.680
00:04:12.680 real 0m4.933s
00:04:12.680 user 0m2.056s
00:04:12.680 sys 0m2.894s
00:04:12.680 15:46:23 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:12.680 15:46:23 -- common/autotest_common.sh@10 -- # set +x
00:04:12.680 15:46:23 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:04:12.680 15:46:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:12.680 15:46:23 -- common/autotest_common.sh@1093 -- # xtrace_disable
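Before the driver suite starts, clear_hp (traced a few records up) drains the pools the hugepages test allocated. A minimal sketch reconstructed from that trace; variable names follow the xtrace, and the upstream function may differ in detail:

    # Reconstructed from the clear_hp xtrace above; not the verbatim upstream code.
    clear_hp() {
        local node hp
        for node in "${!nodes_sys[@]}"; do
            # One iteration per hugepage size the node exposes (two here, consistent
            # with 2 MiB and 1 GiB pool directories).
            for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*; do
                echo 0 > "$hp/nr_hugepages"   # hand every page in this pool back to the kernel
            done
        done
        export CLEAR_HUGE=yes                 # tells later setup.sh runs that the pools were cleared
    }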
00:04:12.680 15:46:23 -- common/autotest_common.sh@10 -- # set +x
00:04:12.680 ************************************
00:04:12.680 START TEST driver
00:04:12.680 ************************************
00:04:12.680 15:46:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:04:12.680 * Looking for test storage...
00:04:12.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:12.680 15:46:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:12.680 15:46:23 -- common/autotest_common.sh@1690 -- # lcov --version
00:04:12.680 15:46:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:12.680 15:46:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:12.680 15:46:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:12.680 15:46:24 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:12.680 15:46:24 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:12.680 15:46:24 -- scripts/common.sh@335 -- # IFS=.-:
00:04:12.680 15:46:24 -- scripts/common.sh@335 -- # read -ra ver1
00:04:12.680 15:46:24 -- scripts/common.sh@336 -- # IFS=.-:
00:04:12.680 15:46:24 -- scripts/common.sh@336 -- # read -ra ver2
00:04:12.680 15:46:24 -- scripts/common.sh@337 -- # local 'op=<'
00:04:12.680 15:46:24 -- scripts/common.sh@339 -- # ver1_l=2
00:04:12.680 15:46:24 -- scripts/common.sh@340 -- # ver2_l=1
00:04:12.680 15:46:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:12.680 15:46:24 -- scripts/common.sh@343 -- # case "$op" in
00:04:12.680 15:46:24 -- scripts/common.sh@344 -- # : 1
00:04:12.680 15:46:24 -- scripts/common.sh@363 -- # (( v = 0 ))
00:04:12.680 15:46:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:12.680 15:46:24 -- scripts/common.sh@364 -- # decimal 1
00:04:12.680 15:46:24 -- scripts/common.sh@352 -- # local d=1
00:04:12.680 15:46:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:12.680 15:46:24 -- scripts/common.sh@354 -- # echo 1
00:04:12.680 15:46:24 -- scripts/common.sh@364 -- # ver1[v]=1
00:04:12.680 15:46:24 -- scripts/common.sh@365 -- # decimal 2
00:04:12.680 15:46:24 -- scripts/common.sh@352 -- # local d=2
00:04:12.680 15:46:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:12.680 15:46:24 -- scripts/common.sh@354 -- # echo 2
00:04:12.680 15:46:24 -- scripts/common.sh@365 -- # ver2[v]=2
00:04:12.680 15:46:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:12.680 15:46:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:12.680 15:46:24 -- scripts/common.sh@367 -- # return 0
00:04:12.680 15:46:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
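The version probe above decides which lcov flags the harness can use: cmp_versions splits each version string on ".", "-" and ":" and compares the fields numerically, so "1.15 < 2" holds because the very first field already differs. A compact sketch of that helper, reconstructed from the trace rather than copied from scripts/common.sh:

    # Reconstructed from the cmp_versions xtrace above; the upstream helper differs in detail.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v lt=0 gt=0
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { gt=1; break; }   # missing fields count as 0
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { lt=1; break; }
        done
        case $op in
            '<') (( lt == 1 )) ;;
            '>') (( gt == 1 )) ;;
        esac
    }
    cmp_versions 1.15 '<' 2 && echo "old lcov"   # field 0: 1 < 2, so this prints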
00:04:12.680 15:46:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:12.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:12.680 --rc genhtml_branch_coverage=1
00:04:12.680 --rc genhtml_function_coverage=1
00:04:12.680 --rc genhtml_legend=1
00:04:12.680 --rc geninfo_all_blocks=1
00:04:12.680 --rc geninfo_unexecuted_blocks=1
00:04:12.680
00:04:12.680 '
00:04:12.680 15:46:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:12.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:12.680 --rc genhtml_branch_coverage=1
00:04:12.680 --rc genhtml_function_coverage=1
00:04:12.680 --rc genhtml_legend=1
00:04:12.680 --rc geninfo_all_blocks=1
00:04:12.680 --rc geninfo_unexecuted_blocks=1
00:04:12.680
00:04:12.680 '
00:04:12.680 15:46:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:04:12.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:12.680 --rc genhtml_branch_coverage=1
00:04:12.680 --rc genhtml_function_coverage=1
00:04:12.680 --rc genhtml_legend=1
00:04:12.680 --rc geninfo_all_blocks=1
00:04:12.680 --rc geninfo_unexecuted_blocks=1
00:04:12.680
00:04:12.680 '
00:04:12.680 15:46:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:04:12.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:12.680 --rc genhtml_branch_coverage=1
00:04:12.680 --rc genhtml_function_coverage=1
00:04:12.680 --rc genhtml_legend=1
00:04:12.680 --rc geninfo_all_blocks=1
00:04:12.680 --rc geninfo_unexecuted_blocks=1
00:04:12.680
00:04:12.680 '
00:04:12.680 15:46:24 -- setup/driver.sh@68 -- # setup reset
00:04:12.680 15:46:24 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:12.680 15:46:24 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:19.282 15:46:29 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:04:19.282 15:46:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:19.282 15:46:29 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:19.282 15:46:29 -- common/autotest_common.sh@10 -- # set +x
00:04:19.282 ************************************
00:04:19.282 START TEST guess_driver
00:04:19.282 ************************************
00:04:19.282 15:46:29 -- common/autotest_common.sh@1114 -- # guess_driver
00:04:19.282 15:46:29 -- setup/driver.sh@46 -- # local driver setup_driver marker
00:04:19.282 15:46:29 -- setup/driver.sh@47 -- # local fail=0
00:04:19.282 15:46:29 -- setup/driver.sh@49 -- # pick_driver
00:04:19.282 15:46:29 -- setup/driver.sh@36 -- # vfio
00:04:19.282 15:46:29 -- setup/driver.sh@21 -- # local iommu_groups
00:04:19.282 15:46:29 -- setup/driver.sh@22 -- # local unsafe_vfio
00:04:19.282 15:46:29 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:19.282 15:46:29 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:19.282 15:46:29 -- setup/driver.sh@29 -- # (( 0 > 0 ))
00:04:19.282 15:46:29 -- setup/driver.sh@29 -- # [[ '' == Y ]]
00:04:19.282 15:46:29 -- setup/driver.sh@32 -- # return 1
00:04:19.282 15:46:29 -- setup/driver.sh@38 -- # uio
00:04:19.282 15:46:29 -- setup/driver.sh@17 -- # is_driver uio_pci_generic
00:04:19.282 15:46:29 -- setup/driver.sh@14 -- # mod uio_pci_generic
00:04:19.282 15:46:29 -- setup/driver.sh@12 -- # dep uio_pci_generic
00:04:19.282 15:46:29 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic
00:04:19.282 15:46:29 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz
00:04:19.282 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]]
00:04:19.282 15:46:29 -- setup/driver.sh@39 -- # echo uio_pci_generic
00:04:19.282 15:46:29 -- setup/driver.sh@49 -- # driver=uio_pci_generic
00:04:19.282 15:46:29 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:04:19.282 Looking for driver=uio_pci_generic
00:04:19.282 15:46:29 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic'
00:04:19.282 15:46:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:19.282 15:46:29 -- setup/driver.sh@45 -- # setup output config
00:04:19.282 15:46:29 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:19.282 15:46:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
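The pick above fell through to uio_pci_generic because this VM exposes no IOMMU groups, so vfio-pci is not viable. A condensed sketch of that fallback logic, reconstructed from the trace (the upstream driver.sh splits it across vfio/uio/is_driver helpers):

    # Reconstructed from the pick_driver xtrace above; not the verbatim upstream code.
    shopt -s nullglob                         # so an empty iommu_groups dir really counts as zero
    pick_driver() {
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        if (( ${#iommu_groups[@]} > 0 )) ||
           [[ $(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null) == Y ]]; then
            echo vfio-pci                     # IOMMU (or the unsafe no-IOMMU override) available
        elif [[ $(modprobe --show-depends uio_pci_generic) == *.ko* ]]; then
            echo uio_pci_generic              # module resolvable on this kernel, usable fallback
        else
            echo 'No valid driver found'
        fi
    }

With zero groups and no unsafe no-IOMMU override, this run printed uio_pci_generic, which "setup output config" is then used to confirm against each device's marker line below.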
00:04:19.282 15:46:30 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]]
00:04:19.282 15:46:30 -- setup/driver.sh@58 -- # continue
00:04:19.282 15:46:30 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:19.544 15:46:30 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:19.544 15:46:30 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
00:04:19.544 15:46:30 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:19.544 15:46:30 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:19.544 15:46:30 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
00:04:19.544 15:46:30 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:19.544 15:46:30 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:19.544 15:46:30 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
00:04:19.544 15:46:30 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:19.544 15:46:30 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:19.544 15:46:30 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
00:04:19.544 15:46:30 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:19.544 15:46:30 -- setup/driver.sh@64 -- # (( fail == 0 ))
00:04:19.544 15:46:30 -- setup/driver.sh@65 -- # setup reset
00:04:19.544 15:46:30 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:19.544 15:46:30 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:26.141
00:04:26.141 real 0m6.925s
00:04:26.141 user 0m0.659s
00:04:26.141 sys 0m1.212s
00:04:26.141 ************************************
00:04:26.141 END TEST guess_driver
00:04:26.141 ************************************
00:04:26.141 15:46:36 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:26.141 15:46:36 -- common/autotest_common.sh@10 -- # set +x
00:04:26.141
00:04:26.141 real 0m12.807s
00:04:26.141 user 0m1.006s
00:04:26.141 sys 0m1.835s
00:04:26.141 15:46:36 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:26.142 ************************************
00:04:26.142 END TEST driver
00:04:26.142 ************************************
00:04:26.142 15:46:36 -- common/autotest_common.sh@10 -- # set +x
00:04:26.142 15:46:36 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:04:26.142 15:46:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:26.142 15:46:36 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:26.142 15:46:36 -- common/autotest_common.sh@10 -- # set +x
00:04:26.142 ************************************
00:04:26.142 START TEST devices
00:04:26.142 ************************************
00:04:26.142 15:46:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:04:26.142 * Looking for test storage...
00:04:26.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:26.142 15:46:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
[log condensed: the same lcov version probe, cmp_versions 1.15 '<' 2 trace and LCOV_OPTS/LCOV exports that opened the driver test repeat here verbatim]
00:04:26.142 15:46:36 -- setup/devices.sh@190 -- # trap cleanup EXIT
00:04:26.142 15:46:36 -- setup/devices.sh@192 -- # setup reset
00:04:26.142 15:46:36 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:26.142 15:46:36 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:26.403 15:46:37 -- setup/devices.sh@194 -- # get_zoned_devs
00:04:26.403 15:46:37 -- common/autotest_common.sh@1664 -- # zoned_devs=()
00:04:26.403 15:46:37 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs
00:04:26.403 15:46:37 -- common/autotest_common.sh@1665 -- # local nvme bdf
00:04:26.403 15:46:37 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:04:26.403 15:46:37 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1
00:04:26.403 15:46:37 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1
00:04:26.403 15:46:37 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]]
00:04:26.403 15:46:37 -- common/autotest_common.sh@1660 -- # [[ none != none ]]
[log condensed: the same is_block_zoned probe repeats for nvme0n1, nvme1n1, nvme1n2, nvme1n3 and nvme2n1; every namespace reports "none"]
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:26.404 15:46:37 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:26.404 15:46:37 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:04:26.404 15:46:37 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:04:26.404 15:46:37 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:26.404 15:46:37 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:26.404 15:46:37 -- setup/devices.sh@196 -- # blocks=() 00:04:26.404 15:46:37 -- setup/devices.sh@196 -- # declare -a blocks 00:04:26.404 15:46:37 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:26.404 15:46:37 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:26.404 15:46:37 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:26.404 15:46:37 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:26.404 15:46:37 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:26.404 15:46:37 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:26.404 15:46:37 -- setup/devices.sh@202 -- # pci=0000:00:09.0 00:04:26.404 15:46:37 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:04:26.404 15:46:37 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:26.404 15:46:37 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:26.404 15:46:37 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:26.404 No valid GPT data, bailing 00:04:26.665 15:46:37 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:26.665 15:46:37 -- scripts/common.sh@393 -- # pt= 00:04:26.665 15:46:37 -- scripts/common.sh@394 -- # return 1 00:04:26.665 15:46:37 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:26.665 15:46:37 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:26.665 15:46:37 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:26.665 15:46:37 -- setup/common.sh@80 -- # echo 1073741824 00:04:26.665 15:46:37 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:04:26.665 15:46:37 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:26.665 15:46:37 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:26.665 15:46:37 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:26.665 15:46:37 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:26.665 15:46:37 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:26.665 15:46:37 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:26.665 15:46:37 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:26.665 15:46:37 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:26.665 No valid GPT data, bailing 00:04:26.665 15:46:37 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:26.665 15:46:37 -- scripts/common.sh@393 -- # pt= 00:04:26.665 15:46:37 -- scripts/common.sh@394 -- # return 1 00:04:26.665 15:46:37 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:26.665 15:46:37 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:26.665 15:46:37 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:26.665 15:46:37 -- setup/common.sh@80 -- # echo 4294967296 00:04:26.665 15:46:37 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:26.665 15:46:37 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:26.665 15:46:37 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:26.665 15:46:37 -- setup/devices.sh@200 -- # 
for block in "/sys/block/nvme"!(*c*) 00:04:26.665 15:46:37 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:26.665 15:46:37 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:26.665 15:46:37 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:26.665 15:46:37 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:26.665 15:46:37 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:26.665 15:46:37 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:26.665 15:46:37 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:26.665 No valid GPT data, bailing 00:04:26.665 15:46:37 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:26.665 15:46:38 -- scripts/common.sh@393 -- # pt= 00:04:26.665 15:46:38 -- scripts/common.sh@394 -- # return 1 00:04:26.665 15:46:38 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:26.665 15:46:38 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:26.665 15:46:38 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:26.665 15:46:38 -- setup/common.sh@80 -- # echo 4294967296 00:04:26.665 15:46:38 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:26.665 15:46:38 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:26.665 15:46:38 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:26.665 15:46:38 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:26.665 15:46:38 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:26.665 15:46:38 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:26.665 15:46:38 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:26.665 15:46:38 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:26.665 15:46:38 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:26.665 15:46:38 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:26.665 15:46:38 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:26.665 No valid GPT data, bailing 00:04:26.665 15:46:38 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:26.665 15:46:38 -- scripts/common.sh@393 -- # pt= 00:04:26.665 15:46:38 -- scripts/common.sh@394 -- # return 1 00:04:26.665 15:46:38 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:26.665 15:46:38 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:26.665 15:46:38 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:26.665 15:46:38 -- setup/common.sh@80 -- # echo 4294967296 00:04:26.665 15:46:38 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:26.665 15:46:38 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:26.665 15:46:38 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:26.665 15:46:38 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:26.665 15:46:38 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:26.665 15:46:38 -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:26.665 15:46:38 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:26.665 15:46:38 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:26.665 15:46:38 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:26.665 15:46:38 -- scripts/common.sh@380 -- # local block=nvme2n1 pt 00:04:26.665 15:46:38 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:04:26.925 No valid GPT data, bailing 00:04:26.925 15:46:38 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:26.926 
15:46:38 -- scripts/common.sh@393 -- # pt= 00:04:26.926 15:46:38 -- scripts/common.sh@394 -- # return 1 00:04:26.926 15:46:38 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:26.926 15:46:38 -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:26.926 15:46:38 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:04:26.926 15:46:38 -- setup/common.sh@80 -- # echo 6343335936 00:04:26.926 15:46:38 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:04:26.926 15:46:38 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:26.926 15:46:38 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:26.926 15:46:38 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:26.926 15:46:38 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:04:26.926 15:46:38 -- setup/devices.sh@201 -- # ctrl=nvme3 00:04:26.926 15:46:38 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:26.926 15:46:38 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:26.926 15:46:38 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:04:26.926 15:46:38 -- scripts/common.sh@380 -- # local block=nvme3n1 pt 00:04:26.926 15:46:38 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:04:26.926 No valid GPT data, bailing 00:04:26.926 15:46:38 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:26.926 15:46:38 -- scripts/common.sh@393 -- # pt= 00:04:26.926 15:46:38 -- scripts/common.sh@394 -- # return 1 00:04:26.926 15:46:38 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:04:26.926 15:46:38 -- setup/common.sh@76 -- # local dev=nvme3n1 00:04:26.926 15:46:38 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:04:26.926 15:46:38 -- setup/common.sh@80 -- # echo 5368709120 00:04:26.926 15:46:38 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:26.926 15:46:38 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:26.926 15:46:38 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:26.926 15:46:38 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:04:26.926 15:46:38 -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:04:26.926 15:46:38 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:26.926 15:46:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:26.926 15:46:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:26.926 15:46:38 -- common/autotest_common.sh@10 -- # set +x 00:04:26.926 ************************************ 00:04:26.926 START TEST nvme_mount 00:04:26.926 ************************************ 00:04:26.926 15:46:38 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:26.926 15:46:38 -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:04:26.926 15:46:38 -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:04:26.926 15:46:38 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.926 15:46:38 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:26.926 15:46:38 -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:04:26.926 15:46:38 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:26.926 15:46:38 -- setup/common.sh@40 -- # local part_no=1 00:04:26.926 15:46:38 -- setup/common.sh@41 -- # local size=1073741824 00:04:26.926 15:46:38 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:26.926 15:46:38 -- setup/common.sh@44 -- # parts=() 00:04:26.926 15:46:38 -- 
setup/common.sh@44 -- # local parts 00:04:26.926 15:46:38 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:26.926 15:46:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:26.926 15:46:38 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:26.926 15:46:38 -- setup/common.sh@46 -- # (( part++ )) 00:04:26.926 15:46:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:26.926 15:46:38 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:26.926 15:46:38 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:26.926 15:46:38 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:04:27.866 Creating new GPT entries in memory. 00:04:27.866 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:27.866 other utilities. 00:04:27.866 15:46:39 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:27.866 15:46:39 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.866 15:46:39 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:27.866 15:46:39 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:27.866 15:46:39 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:29.250 Creating new GPT entries in memory. 00:04:29.250 The operation has completed successfully. 00:04:29.250 15:46:40 -- setup/common.sh@57 -- # (( part++ )) 00:04:29.250 15:46:40 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:29.250 15:46:40 -- setup/common.sh@62 -- # wait 53701 00:04:29.250 15:46:40 -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:29.250 15:46:40 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:29.250 15:46:40 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:29.250 15:46:40 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:04:29.250 15:46:40 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:04:29.250 15:46:40 -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:29.250 15:46:40 -- setup/devices.sh@105 -- # verify 0000:00:08.0 nvme1n1:nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:29.250 15:46:40 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:29.250 15:46:40 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1 00:04:29.250 15:46:40 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:29.250 15:46:40 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:29.250 15:46:40 -- setup/devices.sh@53 -- # local found=0 00:04:29.250 15:46:40 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:29.250 15:46:40 -- setup/devices.sh@56 -- # : 00:04:29.250 15:46:40 -- setup/devices.sh@59 -- # local pci status 00:04:29.250 15:46:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.250 15:46:40 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:29.250 15:46:40 -- setup/devices.sh@47 -- # setup output config 00:04:29.250 15:46:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.250 15:46:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:29.250 15:46:40 -- setup/devices.sh@62 -- # [[ 
0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:29.250 15:46:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.512 15:46:40 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:29.512 15:46:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.512 15:46:40 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:29.512 15:46:40 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:04:29.512 15:46:40 -- setup/devices.sh@63 -- # found=1 00:04:29.512 15:46:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.512 15:46:40 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:29.512 15:46:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.772 15:46:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:29.772 15:46:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.772 15:46:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:29.772 15:46:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.033 15:46:41 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:30.033 15:46:41 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:30.033 15:46:41 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:30.033 15:46:41 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:30.033 15:46:41 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:30.033 15:46:41 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:30.033 15:46:41 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:30.033 15:46:41 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:30.033 15:46:41 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:30.033 15:46:41 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:30.033 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:30.033 15:46:41 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:30.033 15:46:41 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:30.292 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:30.292 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:30.292 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:30.292 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:30.292 15:46:41 -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:30.292 15:46:41 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:30.292 15:46:41 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:30.292 15:46:41 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:04:30.292 15:46:41 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:04:30.292 15:46:41 -- setup/common.sh@72 -- # mount /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:30.292 15:46:41 -- setup/devices.sh@116 -- # verify 0000:00:08.0 nvme1n1:nvme1n1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:30.292 15:46:41 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:30.292 15:46:41 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:04:30.292 15:46:41 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:30.292 15:46:41 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:30.292 15:46:41 -- setup/devices.sh@53 -- # local found=0 00:04:30.293 15:46:41 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:30.293 15:46:41 -- setup/devices.sh@56 -- # : 00:04:30.293 15:46:41 -- setup/devices.sh@59 -- # local pci status 00:04:30.293 15:46:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.293 15:46:41 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:30.293 15:46:41 -- setup/devices.sh@47 -- # setup output config 00:04:30.293 15:46:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.293 15:46:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:30.551 15:46:41 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:30.551 15:46:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.551 15:46:41 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:30.551 15:46:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.809 15:46:42 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:30.809 15:46:42 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:04:30.809 15:46:42 -- setup/devices.sh@63 -- # found=1 00:04:30.809 15:46:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.809 15:46:42 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:30.809 15:46:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.809 15:46:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:30.809 15:46:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.067 15:46:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:31.067 15:46:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.067 15:46:42 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:31.067 15:46:42 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:31.067 15:46:42 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.067 15:46:42 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:31.067 15:46:42 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:31.067 15:46:42 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.067 15:46:42 -- setup/devices.sh@125 -- # verify 0000:00:08.0 data@nvme1n1 '' '' 00:04:31.067 15:46:42 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:31.067 15:46:42 -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:04:31.067 15:46:42 -- setup/devices.sh@50 -- # local mount_point= 00:04:31.067 15:46:42 -- setup/devices.sh@51 -- # local test_file= 00:04:31.067 15:46:42 -- setup/devices.sh@53 -- # local found=0 
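The verify calls traced here all follow one pattern: run setup.sh config with PCI_ALLOWED pinned to a single controller, read each output line into pci/status fields, and treat a status of the form "Active devices: mount@<disk>:<partition>, so not binding PCI dev" as proof that the mount is really holding that controller. A hypothetical condensed helper (the function name and the direct script invocation are illustrative, not the actual test/setup/devices.sh code):

verify_active_mount() {
    local allowed=$1 mounts=$2 pci status found=0
    while read -r pci _ _ status; do
        # Only the status column of the allowed controller matters.
        [[ $pci == "$allowed" ]] || continue
        # setup.sh reports e.g. "Active devices: mount@nvme1n1:nvme1n1p1,
        # so not binding PCI dev" when a mount keeps the device bound.
        [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
    done < <(PCI_ALLOWED=$allowed /home/vagrant/spdk_repo/spdk/scripts/setup.sh config)
    (( found == 1 ))
}

# e.g. verify_active_mount 0000:00:08.0 nvme1n1:nvme1n1p1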
00:04:31.067 15:46:42 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:31.067 15:46:42 -- setup/devices.sh@59 -- # local pci status 00:04:31.067 15:46:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.067 15:46:42 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:31.067 15:46:42 -- setup/devices.sh@47 -- # setup output config 00:04:31.067 15:46:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.067 15:46:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:31.067 15:46:42 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:31.067 15:46:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.325 15:46:42 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:31.325 15:46:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.585 15:46:42 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:31.585 15:46:42 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:04:31.585 15:46:42 -- setup/devices.sh@63 -- # found=1 00:04:31.585 15:46:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.585 15:46:42 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:31.585 15:46:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.585 15:46:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:31.585 15:46:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.585 15:46:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:31.585 15:46:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.843 15:46:43 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:31.843 15:46:43 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:31.843 15:46:43 -- setup/devices.sh@68 -- # return 0 00:04:31.843 15:46:43 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:31.843 15:46:43 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.843 15:46:43 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:31.843 15:46:43 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:31.843 15:46:43 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:31.843 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:31.843 00:04:31.843 real 0m4.827s 00:04:31.843 user 0m0.971s 00:04:31.843 sys 0m1.282s 00:04:31.843 15:46:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:31.843 ************************************ 00:04:31.843 END TEST nvme_mount 00:04:31.843 ************************************ 00:04:31.843 15:46:43 -- common/autotest_common.sh@10 -- # set +x 00:04:31.843 15:46:43 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:31.843 15:46:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:31.843 15:46:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:31.843 15:46:43 -- common/autotest_common.sh@10 -- # set +x 00:04:31.843 ************************************ 00:04:31.843 START TEST dm_mount 00:04:31.843 ************************************ 00:04:31.843 15:46:43 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:31.843 15:46:43 -- setup/devices.sh@144 -- # pv=nvme1n1 00:04:31.843 15:46:43 -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:04:31.843 15:46:43 -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:04:31.843 15:46:43 -- setup/devices.sh@148 -- # 
partition_drive nvme1n1 00:04:31.843 15:46:43 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:31.843 15:46:43 -- setup/common.sh@40 -- # local part_no=2 00:04:31.843 15:46:43 -- setup/common.sh@41 -- # local size=1073741824 00:04:31.843 15:46:43 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:31.843 15:46:43 -- setup/common.sh@44 -- # parts=() 00:04:31.843 15:46:43 -- setup/common.sh@44 -- # local parts 00:04:31.844 15:46:43 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:31.844 15:46:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:31.844 15:46:43 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:31.844 15:46:43 -- setup/common.sh@46 -- # (( part++ )) 00:04:31.844 15:46:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:31.844 15:46:43 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:31.844 15:46:43 -- setup/common.sh@46 -- # (( part++ )) 00:04:31.844 15:46:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:31.844 15:46:43 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:31.844 15:46:43 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:31.844 15:46:43 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:04:32.776 Creating new GPT entries in memory. 00:04:32.776 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:32.776 other utilities. 00:04:32.776 15:46:44 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:32.776 15:46:44 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:32.776 15:46:44 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:32.776 15:46:44 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:32.776 15:46:44 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:34.151 Creating new GPT entries in memory. 00:04:34.151 The operation has completed successfully. 00:04:34.151 15:46:45 -- setup/common.sh@57 -- # (( part++ )) 00:04:34.151 15:46:45 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:34.151 15:46:45 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:34.151 15:46:45 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:34.151 15:46:45 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335 00:04:35.098 The operation has completed successfully. 
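Condensed from the partition_drive trace above, the per-partition LBA ranges come from a simple recurrence: the 1073741824-byte budget is divided by 4096 to get a per-partition sector count (262144), the first partition starts at LBA 2048, and each subsequent one starts right after its predecessor. A minimal sketch of that loop (the uevent-sync helper the test runs alongside sgdisk is omitted here):

disk=/dev/nvme1n1 part_no=2
size=$(( 1073741824 / 4096 ))      # 262144 sectors per partition
sgdisk "$disk" --zap-all           # drop any existing GPT/MBR signatures
part_start=0 part_end=0
for (( part = 1; part <= part_no; part++ )); do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    # flock serializes sgdisk against other users of the disk; this yields
    # --new=1:2048:264191 and --new=2:264192:526335, as in the log above.
    flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
done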
00:04:35.098 15:46:46 -- setup/common.sh@57 -- # (( part++ )) 00:04:35.099 15:46:46 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:35.099 15:46:46 -- setup/common.sh@62 -- # wait 54323 00:04:35.099 15:46:46 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:35.099 15:46:46 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:35.099 15:46:46 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:35.099 15:46:46 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:35.099 15:46:46 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:35.099 15:46:46 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:35.099 15:46:46 -- setup/devices.sh@161 -- # break 00:04:35.099 15:46:46 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:35.099 15:46:46 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:35.099 15:46:46 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:35.099 15:46:46 -- setup/devices.sh@166 -- # dm=dm-0 00:04:35.099 15:46:46 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:04:35.099 15:46:46 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:04:35.099 15:46:46 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:35.099 15:46:46 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:35.099 15:46:46 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:35.099 15:46:46 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:35.099 15:46:46 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:35.099 15:46:46 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:35.099 15:46:46 -- setup/devices.sh@174 -- # verify 0000:00:08.0 nvme1n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:35.099 15:46:46 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:35.099 15:46:46 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:04:35.099 15:46:46 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:35.099 15:46:46 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:35.099 15:46:46 -- setup/devices.sh@53 -- # local found=0 00:04:35.099 15:46:46 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:35.099 15:46:46 -- setup/devices.sh@56 -- # : 00:04:35.099 15:46:46 -- setup/devices.sh@59 -- # local pci status 00:04:35.099 15:46:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.099 15:46:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:35.099 15:46:46 -- setup/devices.sh@47 -- # setup output config 00:04:35.099 15:46:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.099 15:46:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:35.099 15:46:46 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:35.099 15:46:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.357 15:46:46 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:35.357 15:46:46 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.357 15:46:46 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:35.357 15:46:46 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:35.357 15:46:46 -- setup/devices.sh@63 -- # found=1 00:04:35.357 15:46:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.357 15:46:46 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:35.357 15:46:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.615 15:46:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:35.615 15:46:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.615 15:46:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:35.615 15:46:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.615 15:46:46 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:35.615 15:46:46 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:35.615 15:46:46 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:35.615 15:46:46 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:35.615 15:46:46 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:35.615 15:46:46 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:35.615 15:46:46 -- setup/devices.sh@184 -- # verify 0000:00:08.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:04:35.615 15:46:46 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:35.615 15:46:46 -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 00:04:35.615 15:46:46 -- setup/devices.sh@50 -- # local mount_point= 00:04:35.615 15:46:46 -- setup/devices.sh@51 -- # local test_file= 00:04:35.615 15:46:46 -- setup/devices.sh@53 -- # local found=0 00:04:35.615 15:46:46 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:35.615 15:46:46 -- setup/devices.sh@59 -- # local pci status 00:04:35.615 15:46:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.615 15:46:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:35.615 15:46:46 -- setup/devices.sh@47 -- # setup output config 00:04:35.615 15:46:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.615 15:46:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:35.873 15:46:47 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:35.873 15:46:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.873 15:46:47 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:35.873 15:46:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.131 15:46:47 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:36.131 15:46:47 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:04:36.131 15:46:47 -- setup/devices.sh@63 -- # found=1 00:04:36.131 15:46:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.131 15:46:47 -- setup/devices.sh@62 -- 
# [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:36.131 15:46:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.131 15:46:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:36.131 15:46:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.389 15:46:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:36.389 15:46:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.389 15:46:47 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:36.389 15:46:47 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:36.389 15:46:47 -- setup/devices.sh@68 -- # return 0 00:04:36.389 15:46:47 -- setup/devices.sh@187 -- # cleanup_dm 00:04:36.389 15:46:47 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:36.389 15:46:47 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:36.389 15:46:47 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:36.389 15:46:47 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:36.389 15:46:47 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:04:36.389 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:36.389 15:46:47 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:36.389 15:46:47 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:04:36.389 00:04:36.389 real 0m4.559s 00:04:36.389 user 0m0.614s 00:04:36.389 sys 0m0.844s 00:04:36.389 15:46:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:36.389 ************************************ 00:04:36.389 15:46:47 -- common/autotest_common.sh@10 -- # set +x 00:04:36.389 END TEST dm_mount 00:04:36.389 ************************************ 00:04:36.389 15:46:47 -- setup/devices.sh@1 -- # cleanup 00:04:36.389 15:46:47 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:36.389 15:46:47 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:36.389 15:46:47 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:36.389 15:46:47 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:36.389 15:46:47 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:36.389 15:46:47 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:36.647 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:36.647 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:36.647 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:36.647 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:36.647 15:46:47 -- setup/devices.sh@12 -- # cleanup_dm 00:04:36.647 15:46:47 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:36.647 15:46:47 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:36.647 15:46:47 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:36.647 15:46:47 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:36.647 15:46:47 -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:04:36.647 15:46:47 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:04:36.647 ************************************ 00:04:36.647 END TEST devices 00:04:36.647 ************************************ 00:04:36.647 00:04:36.647 real 0m11.256s 00:04:36.647 user 0m2.335s 00:04:36.647 sys 0m2.817s 00:04:36.647 15:46:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:36.647 15:46:48 -- common/autotest_common.sh@10 -- # 
set +x 00:04:36.647 00:04:36.647 real 0m40.359s 00:04:36.647 user 0m7.836s 00:04:36.647 sys 0m11.003s 00:04:36.647 15:46:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:36.647 15:46:48 -- common/autotest_common.sh@10 -- # set +x 00:04:36.647 ************************************ 00:04:36.647 END TEST setup.sh 00:04:36.647 ************************************ 00:04:36.647 15:46:48 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:36.904 Hugepages 00:04:36.904 node hugesize free / total 00:04:36.904 node0 1048576kB 0 / 0 00:04:36.904 node0 2048kB 2048 / 2048 00:04:36.904 00:04:36.904 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:36.904 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:36.904 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:37.162 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:37.162 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:37.162 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:37.162 15:46:48 -- spdk/autotest.sh@128 -- # uname -s 00:04:37.162 15:46:48 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:37.162 15:46:48 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:37.162 15:46:48 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:38.096 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:38.096 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:38.096 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:38.096 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:38.096 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:38.355 15:46:49 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:39.289 15:46:50 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:39.289 15:46:50 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:39.289 15:46:50 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:39.289 15:46:50 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:39.289 15:46:50 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:39.289 15:46:50 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:39.289 15:46:50 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:39.289 15:46:50 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:39.289 15:46:50 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:39.289 15:46:50 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:04:39.289 15:46:50 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:39.289 15:46:50 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:39.854 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:39.854 Waiting for block devices as requested 00:04:39.854 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:04:39.854 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:04:39.854 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:40.113 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:04:45.376 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:04:45.376 15:46:56 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:45.376 15:46:56 -- 
common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:45.376 15:46:56 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:45.376 15:46:56 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:04:45.376 15:46:56 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:04:45.376 15:46:56 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 ]] 00:04:45.377 15:46:56 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:04:45.377 15:46:56 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme2 00:04:45.377 15:46:56 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme2 00:04:45.377 15:46:56 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme2 ]] 00:04:45.377 15:46:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:45.377 15:46:56 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:45.377 15:46:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:45.377 15:46:56 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:45.377 15:46:56 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:45.377 15:46:56 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:45.377 15:46:56 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:45.377 15:46:56 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme2 00:04:45.377 15:46:56 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:45.377 15:46:56 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:45.377 15:46:56 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:45.377 15:46:56 -- common/autotest_common.sh@1552 -- # continue 00:04:45.377 15:46:56 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:45.377 15:46:56 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:04:45.377 15:46:56 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:04:45.377 15:46:56 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:45.377 15:46:56 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:04:45.377 15:46:56 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 ]] 00:04:45.377 15:46:56 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:04:45.377 15:46:56 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme3 00:04:45.377 15:46:56 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme3 00:04:45.377 15:46:56 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme3 ]] 00:04:45.377 15:46:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:45.377 15:46:56 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:45.377 15:46:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:45.377 15:46:56 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:45.377 15:46:56 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:45.377 15:46:56 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:45.377 15:46:56 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:45.377 15:46:56 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme3 00:04:45.377 15:46:56 -- common/autotest_common.sh@1549 -- # cut 
-d: -f2 00:04:45.377 15:46:56 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:45.377 15:46:56 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:45.377 15:46:56 -- common/autotest_common.sh@1552 -- # continue 00:04:45.377 15:46:56 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:45.377 15:46:56 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:08.0 00:04:45.377 15:46:56 -- common/autotest_common.sh@1497 -- # grep 0000:00:08.0/nvme/nvme 00:04:45.377 15:46:56 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:45.377 15:46:56 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:04:45.377 15:46:56 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 ]] 00:04:45.377 15:46:56 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:04:45.377 15:46:56 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:04:45.377 15:46:56 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:04:45.377 15:46:56 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:04:45.377 15:46:56 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:45.377 15:46:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:45.377 15:46:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:45.377 15:46:56 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:45.377 15:46:56 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:45.377 15:46:56 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:45.377 15:46:56 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:04:45.377 15:46:56 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:45.377 15:46:56 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:45.377 15:46:56 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:45.377 15:46:56 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:45.377 15:46:56 -- common/autotest_common.sh@1552 -- # continue 00:04:45.377 15:46:56 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:45.377 15:46:56 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:09.0 00:04:45.377 15:46:56 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:45.377 15:46:56 -- common/autotest_common.sh@1497 -- # grep 0000:00:09.0/nvme/nvme 00:04:45.377 15:46:56 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:04:45.377 15:46:56 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 ]] 00:04:45.377 15:46:56 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:04:45.377 15:46:56 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:04:45.377 15:46:56 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:04:45.377 15:46:56 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:04:45.377 15:46:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:45.377 15:46:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:45.377 15:46:56 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:45.377 15:46:56 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:45.377 15:46:56 -- 
common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:45.377 15:46:56 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:45.377 15:46:56 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:04:45.377 15:46:56 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:45.377 15:46:56 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:45.377 15:46:56 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:45.377 15:46:56 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:45.377 15:46:56 -- common/autotest_common.sh@1552 -- # continue 00:04:45.377 15:46:56 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:04:45.377 15:46:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:45.377 15:46:56 -- common/autotest_common.sh@10 -- # set +x 00:04:45.377 15:46:56 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:04:45.377 15:46:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:45.377 15:46:56 -- common/autotest_common.sh@10 -- # set +x 00:04:45.377 15:46:56 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:45.948 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:46.210 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.210 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.210 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.210 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.210 15:46:57 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:04:46.210 15:46:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:46.210 15:46:57 -- common/autotest_common.sh@10 -- # set +x 00:04:46.518 15:46:57 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:04:46.518 15:46:57 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:04:46.518 15:46:57 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:04:46.518 15:46:57 -- common/autotest_common.sh@1572 -- # bdfs=() 00:04:46.518 15:46:57 -- common/autotest_common.sh@1572 -- # local bdfs 00:04:46.518 15:46:57 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:04:46.518 15:46:57 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:46.518 15:46:57 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:46.518 15:46:57 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:46.518 15:46:57 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:46.518 15:46:57 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:46.518 15:46:57 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:04:46.518 15:46:57 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:46.518 15:46:57 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:46.518 15:46:57 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:46.518 15:46:57 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:46.518 15:46:57 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:46.518 15:46:57 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:46.518 15:46:57 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:04:46.518 15:46:57 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:46.518 15:46:57 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
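The nvme_namespace_revert gate traced above boils down to two id-ctrl probes per controller: OACS bit 3 (mask 0x8) is Namespace Management in the NVMe spec, which is presumably how oacs=' 0x12a' becomes oacs_ns_manage=8, and a controller with that bit clear, or with no unallocated capacity (unvmcap 0), is skipped. A sketch under those assumptions (the loop body and the &-mask derivation are inferred, not the literal common/autotest_common.sh source):

for ctrl in /dev/nvme0 /dev/nvme1 /dev/nvme2 /dev/nvme3; do
    oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)   # e.g. " 0x12a"
    oacs_ns_manage=$(( oacs & 0x8 ))   # bit 3: Namespace Management support
    (( oacs_ns_manage != 0 )) || continue
    unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)
    # unvmcap of 0 means no unallocated NVM capacity to revert; move on.
    (( unvmcap == 0 )) && continue
    # ...namespace revert work would go here for remaining controllers...
done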
00:04:46.518 15:46:57 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:46.518 15:46:57 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:08.0/device 00:04:46.518 15:46:57 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:46.518 15:46:57 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:46.518 15:46:57 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:46.518 15:46:57 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:09.0/device 00:04:46.518 15:46:57 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:46.518 15:46:57 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:46.518 15:46:57 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:04:46.518 15:46:57 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:04:46.518 15:46:57 -- common/autotest_common.sh@1588 -- # return 0 00:04:46.518 15:46:57 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:04:46.518 15:46:57 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:04:46.518 15:46:57 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:46.518 15:46:57 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:46.518 15:46:57 -- spdk/autotest.sh@160 -- # timing_enter lib 00:04:46.518 15:46:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:46.518 15:46:57 -- common/autotest_common.sh@10 -- # set +x 00:04:46.518 15:46:57 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:46.518 15:46:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.518 15:46:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.518 15:46:57 -- common/autotest_common.sh@10 -- # set +x 00:04:46.518 ************************************ 00:04:46.518 START TEST env 00:04:46.518 ************************************ 00:04:46.518 15:46:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:46.518 * Looking for test storage... 00:04:46.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:46.518 15:46:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:46.518 15:46:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:46.518 15:46:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:46.518 15:46:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:46.518 15:46:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:46.518 15:46:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:46.518 15:46:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:46.518 15:46:57 -- scripts/common.sh@335 -- # IFS=.-: 00:04:46.518 15:46:57 -- scripts/common.sh@335 -- # read -ra ver1 00:04:46.518 15:46:57 -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.518 15:46:57 -- scripts/common.sh@336 -- # read -ra ver2 00:04:46.518 15:46:57 -- scripts/common.sh@337 -- # local 'op=<' 00:04:46.518 15:46:57 -- scripts/common.sh@339 -- # ver1_l=2 00:04:46.518 15:46:57 -- scripts/common.sh@340 -- # ver2_l=1 00:04:46.518 15:46:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:46.518 15:46:57 -- scripts/common.sh@343 -- # case "$op" in 00:04:46.518 15:46:57 -- scripts/common.sh@344 -- # : 1 00:04:46.518 15:46:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:46.518 15:46:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:46.518 15:46:57 -- scripts/common.sh@364 -- # decimal 1 00:04:46.518 15:46:57 -- scripts/common.sh@352 -- # local d=1 00:04:46.518 15:46:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.518 15:46:57 -- scripts/common.sh@354 -- # echo 1 00:04:46.518 15:46:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:46.518 15:46:57 -- scripts/common.sh@365 -- # decimal 2 00:04:46.518 15:46:57 -- scripts/common.sh@352 -- # local d=2 00:04:46.518 15:46:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.518 15:46:57 -- scripts/common.sh@354 -- # echo 2 00:04:46.518 15:46:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:46.518 15:46:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:46.518 15:46:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:46.518 15:46:57 -- scripts/common.sh@367 -- # return 0 00:04:46.518 15:46:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.518 15:46:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:46.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.518 --rc genhtml_branch_coverage=1 00:04:46.518 --rc genhtml_function_coverage=1 00:04:46.518 --rc genhtml_legend=1 00:04:46.518 --rc geninfo_all_blocks=1 00:04:46.518 --rc geninfo_unexecuted_blocks=1 00:04:46.518 00:04:46.518 ' 00:04:46.518 15:46:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:46.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.518 --rc genhtml_branch_coverage=1 00:04:46.518 --rc genhtml_function_coverage=1 00:04:46.518 --rc genhtml_legend=1 00:04:46.518 --rc geninfo_all_blocks=1 00:04:46.518 --rc geninfo_unexecuted_blocks=1 00:04:46.518 00:04:46.518 ' 00:04:46.518 15:46:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:46.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.518 --rc genhtml_branch_coverage=1 00:04:46.518 --rc genhtml_function_coverage=1 00:04:46.518 --rc genhtml_legend=1 00:04:46.518 --rc geninfo_all_blocks=1 00:04:46.518 --rc geninfo_unexecuted_blocks=1 00:04:46.518 00:04:46.518 ' 00:04:46.518 15:46:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:46.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.518 --rc genhtml_branch_coverage=1 00:04:46.518 --rc genhtml_function_coverage=1 00:04:46.518 --rc genhtml_legend=1 00:04:46.518 --rc geninfo_all_blocks=1 00:04:46.518 --rc geninfo_unexecuted_blocks=1 00:04:46.518 00:04:46.518 ' 00:04:46.518 15:46:57 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:46.518 15:46:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.518 15:46:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.518 15:46:57 -- common/autotest_common.sh@10 -- # set +x 00:04:46.518 ************************************ 00:04:46.518 START TEST env_memory 00:04:46.519 ************************************ 00:04:46.519 15:46:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:46.519 00:04:46.519 00:04:46.519 CUnit - A unit testing framework for C - Version 2.1-3 00:04:46.519 http://cunit.sourceforge.net/ 00:04:46.519 00:04:46.519 00:04:46.519 Suite: memory 00:04:46.779 Test: alloc and free memory map ...[2024-11-29 15:46:57.968405] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:46.779 passed 00:04:46.779 Test: mem 
map translation ...[2024-11-29 15:46:58.007128] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:46.779 [2024-11-29 15:46:58.007169] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:46.779 [2024-11-29 15:46:58.007227] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:46.779 [2024-11-29 15:46:58.007241] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:46.779 passed 00:04:46.779 Test: mem map registration ...[2024-11-29 15:46:58.075477] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:46.779 [2024-11-29 15:46:58.075515] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:46.779 passed 00:04:46.779 Test: mem map adjacent registrations ...passed 00:04:46.779 00:04:46.779 Run Summary: Type Total Ran Passed Failed Inactive 00:04:46.779 suites 1 1 n/a 0 0 00:04:46.779 tests 4 4 4 0 0 00:04:46.779 asserts 152 152 152 0 n/a 00:04:46.779 00:04:46.779 Elapsed time = 0.233 seconds 00:04:46.779 00:04:46.779 real 0m0.269s 00:04:46.779 user 0m0.249s 00:04:46.779 sys 0m0.012s 00:04:46.779 15:46:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:46.779 15:46:58 -- common/autotest_common.sh@10 -- # set +x 00:04:46.779 ************************************ 00:04:46.779 END TEST env_memory 00:04:46.779 ************************************ 00:04:47.041 15:46:58 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:47.041 15:46:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.041 15:46:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.041 15:46:58 -- common/autotest_common.sh@10 -- # set +x 00:04:47.041 ************************************ 00:04:47.041 START TEST env_vtophys 00:04:47.041 ************************************ 00:04:47.041 15:46:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:47.041 EAL: lib.eal log level changed from notice to debug 00:04:47.041 EAL: Detected lcore 0 as core 0 on socket 0 00:04:47.041 EAL: Detected lcore 1 as core 0 on socket 0 00:04:47.041 EAL: Detected lcore 2 as core 0 on socket 0 00:04:47.041 EAL: Detected lcore 3 as core 0 on socket 0 00:04:47.041 EAL: Detected lcore 4 as core 0 on socket 0 00:04:47.041 EAL: Detected lcore 5 as core 0 on socket 0 00:04:47.041 EAL: Detected lcore 6 as core 0 on socket 0 00:04:47.041 EAL: Detected lcore 7 as core 0 on socket 0 00:04:47.041 EAL: Detected lcore 8 as core 0 on socket 0 00:04:47.041 EAL: Detected lcore 9 as core 0 on socket 0 00:04:47.041 EAL: Maximum logical cores by configuration: 128 00:04:47.041 EAL: Detected CPU lcores: 10 00:04:47.041 EAL: Detected NUMA nodes: 1 00:04:47.041 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:47.041 EAL: Detected shared linkage of DPDK 00:04:47.041 EAL: No shared files mode enabled, IPC will be disabled 00:04:47.041 EAL: Selected IOVA mode 'PA' 00:04:47.041 EAL: Probing VFIO support... 
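Note: the *ERROR* lines above are negative cases the memory_ut test drives on purpose. spdk_mem_map translations work in 2 MiB units, so both vaddr and len must be 2 MiB-aligned, and the address must stay inside the usual x86-64 user VA range. A quick re-derivation of why those exact parameters are rejected (plain shell arithmetic, not an SPDK command):

    align=$((2 * 1024 * 1024))                    # 2097152 = 2 MiB
    for pair in "2097152 1234" "1234 2097152"; do
      set -- $pair
      (( $1 % align == 0 && $2 % align == 0 )) || echo "vaddr=$1 len=$2 rejected"
    done
    echo $(( 281474976710656 == 1 << 48 ))        # prints 1: the rejected address is exactly 2^48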
00:04:47.041 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:47.041 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:47.041 EAL: Ask a virtual area of 0x2e000 bytes 00:04:47.041 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:47.041 EAL: Setting up physically contiguous memory... 00:04:47.041 EAL: Setting maximum number of open files to 524288 00:04:47.041 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:47.041 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:47.041 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.041 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:47.041 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.041 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.041 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:47.041 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:47.041 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.041 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:47.041 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.041 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.041 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:47.041 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:47.041 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.041 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:47.041 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.041 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.041 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:47.041 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:47.041 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.041 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:47.041 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.041 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.041 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:47.041 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:47.041 EAL: Hugepages will be freed exactly as allocated. 00:04:47.041 EAL: No shared files mode enabled, IPC is disabled 00:04:47.041 EAL: No shared files mode enabled, IPC is disabled 00:04:47.041 EAL: TSC frequency is ~2600000 KHz 00:04:47.041 EAL: Main lcore 0 is ready (tid=7efe85ecfa40;cpuset=[0]) 00:04:47.041 EAL: Trying to obtain current memory policy. 00:04:47.041 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.041 EAL: Restoring previous memory policy: 0 00:04:47.041 EAL: request: mp_malloc_sync 00:04:47.041 EAL: No shared files mode enabled, IPC is disabled 00:04:47.041 EAL: Heap on socket 0 was expanded by 2MB 00:04:47.041 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:47.041 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:47.041 EAL: Mem event callback 'spdk:(nil)' registered 00:04:47.041 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:47.041 00:04:47.041 00:04:47.041 CUnit - A unit testing framework for C - Version 2.1-3 00:04:47.041 http://cunit.sourceforge.net/ 00:04:47.041 00:04:47.041 00:04:47.041 Suite: components_suite 00:04:47.613 Test: vtophys_malloc_test ...passed 00:04:47.613 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
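Note: the memseg-list geometry above is internally consistent and worth sanity-checking once: 8192 segments of 2 MiB hugepages is exactly the 0x400000000-byte VA window each of the 4 lists reserves (plain shell arithmetic, not an SPDK command):

    echo $(( 8192 * 2097152 ))         # 17179869184
    echo $(( 0x400000000 ))            # 17179869184 -> matches the reserved size
    echo $(( 0x400000000 / 1024**3 ))  # 16 (GiB of VA per memseg list)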
00:04:47.613 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.613 EAL: Restoring previous memory policy: 4 00:04:47.613 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.613 EAL: request: mp_malloc_sync 00:04:47.613 EAL: No shared files mode enabled, IPC is disabled 00:04:47.613 EAL: Heap on socket 0 was expanded by 4MB 00:04:47.613 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.613 EAL: request: mp_malloc_sync 00:04:47.613 EAL: No shared files mode enabled, IPC is disabled 00:04:47.613 EAL: Heap on socket 0 was shrunk by 4MB 00:04:47.613 EAL: Trying to obtain current memory policy. 00:04:47.613 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.613 EAL: Restoring previous memory policy: 4 00:04:47.613 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.613 EAL: request: mp_malloc_sync 00:04:47.613 EAL: No shared files mode enabled, IPC is disabled 00:04:47.613 EAL: Heap on socket 0 was expanded by 6MB 00:04:47.613 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.613 EAL: request: mp_malloc_sync 00:04:47.613 EAL: No shared files mode enabled, IPC is disabled 00:04:47.613 EAL: Heap on socket 0 was shrunk by 6MB 00:04:47.613 EAL: Trying to obtain current memory policy. 00:04:47.613 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.613 EAL: Restoring previous memory policy: 4 00:04:47.613 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.613 EAL: request: mp_malloc_sync 00:04:47.613 EAL: No shared files mode enabled, IPC is disabled 00:04:47.613 EAL: Heap on socket 0 was expanded by 10MB 00:04:47.613 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.613 EAL: request: mp_malloc_sync 00:04:47.613 EAL: No shared files mode enabled, IPC is disabled 00:04:47.613 EAL: Heap on socket 0 was shrunk by 10MB 00:04:47.613 EAL: Trying to obtain current memory policy. 00:04:47.613 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.613 EAL: Restoring previous memory policy: 4 00:04:47.613 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.613 EAL: request: mp_malloc_sync 00:04:47.613 EAL: No shared files mode enabled, IPC is disabled 00:04:47.613 EAL: Heap on socket 0 was expanded by 18MB 00:04:47.613 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.613 EAL: request: mp_malloc_sync 00:04:47.613 EAL: No shared files mode enabled, IPC is disabled 00:04:47.613 EAL: Heap on socket 0 was shrunk by 18MB 00:04:47.613 EAL: Trying to obtain current memory policy. 00:04:47.613 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.613 EAL: Restoring previous memory policy: 4 00:04:47.613 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.613 EAL: request: mp_malloc_sync 00:04:47.613 EAL: No shared files mode enabled, IPC is disabled 00:04:47.613 EAL: Heap on socket 0 was expanded by 34MB 00:04:47.613 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.613 EAL: request: mp_malloc_sync 00:04:47.613 EAL: No shared files mode enabled, IPC is disabled 00:04:47.613 EAL: Heap on socket 0 was shrunk by 34MB 00:04:47.613 EAL: Trying to obtain current memory policy. 
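Note: each "expanded by"/"shrunk by" pair above is the registered 'spdk:(nil)' mem event callback firing on an allocation and its matching free. If a run looks suspect, a saved copy of this output (file name here is assumed) makes the symmetry easy to check:

    grep -c 'was expanded by' vtophys.log   # allocations that grew the heap
    grep -c 'was shrunk by'   vtophys.log   # an equal count means every one was freed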
00:04:47.613 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.613 EAL: Restoring previous memory policy: 4 00:04:47.613 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.613 EAL: request: mp_malloc_sync 00:04:47.613 EAL: No shared files mode enabled, IPC is disabled 00:04:47.613 EAL: Heap on socket 0 was expanded by 66MB 00:04:47.613 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.613 EAL: request: mp_malloc_sync 00:04:47.613 EAL: No shared files mode enabled, IPC is disabled 00:04:47.613 EAL: Heap on socket 0 was shrunk by 66MB 00:04:47.875 EAL: Trying to obtain current memory policy. 00:04:47.875 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.875 EAL: Restoring previous memory policy: 4 00:04:47.875 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.875 EAL: request: mp_malloc_sync 00:04:47.875 EAL: No shared files mode enabled, IPC is disabled 00:04:47.875 EAL: Heap on socket 0 was expanded by 130MB 00:04:47.875 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.875 EAL: request: mp_malloc_sync 00:04:47.875 EAL: No shared files mode enabled, IPC is disabled 00:04:47.875 EAL: Heap on socket 0 was shrunk by 130MB 00:04:48.136 EAL: Trying to obtain current memory policy. 00:04:48.136 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.136 EAL: Restoring previous memory policy: 4 00:04:48.136 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.136 EAL: request: mp_malloc_sync 00:04:48.136 EAL: No shared files mode enabled, IPC is disabled 00:04:48.136 EAL: Heap on socket 0 was expanded by 258MB 00:04:48.398 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.398 EAL: request: mp_malloc_sync 00:04:48.398 EAL: No shared files mode enabled, IPC is disabled 00:04:48.398 EAL: Heap on socket 0 was shrunk by 258MB 00:04:48.658 EAL: Trying to obtain current memory policy. 00:04:48.658 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.919 EAL: Restoring previous memory policy: 4 00:04:48.919 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.919 EAL: request: mp_malloc_sync 00:04:48.919 EAL: No shared files mode enabled, IPC is disabled 00:04:48.919 EAL: Heap on socket 0 was expanded by 514MB 00:04:49.491 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.491 EAL: request: mp_malloc_sync 00:04:49.491 EAL: No shared files mode enabled, IPC is disabled 00:04:49.491 EAL: Heap on socket 0 was shrunk by 514MB 00:04:50.063 EAL: Trying to obtain current memory policy. 
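Note: the allocation sizes this suite walks through (4, 6, 10, 18, ... up to 1026 MB below) appear to follow 2^n + 2 MB, which is why the elapsed time is dominated by the last few steps. One line reproduces the ladder:

    for n in $(seq 1 10); do printf '%sMB ' $(( (1 << n) + 2 )); done; echo
    # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB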
00:04:50.063 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.322 EAL: Restoring previous memory policy: 4 00:04:50.322 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.322 EAL: request: mp_malloc_sync 00:04:50.322 EAL: No shared files mode enabled, IPC is disabled 00:04:50.322 EAL: Heap on socket 0 was expanded by 1026MB 00:04:51.257 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.257 EAL: request: mp_malloc_sync 00:04:51.257 EAL: No shared files mode enabled, IPC is disabled 00:04:51.257 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:52.195 passed 00:04:52.195 00:04:52.195 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.195 suites 1 1 n/a 0 0 00:04:52.195 tests 2 2 2 0 0 00:04:52.195 asserts 5327 5327 5327 0 n/a 00:04:52.195 00:04:52.195 Elapsed time = 4.925 seconds 00:04:52.195 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.195 EAL: request: mp_malloc_sync 00:04:52.195 EAL: No shared files mode enabled, IPC is disabled 00:04:52.195 EAL: Heap on socket 0 was shrunk by 2MB 00:04:52.195 EAL: No shared files mode enabled, IPC is disabled 00:04:52.195 EAL: No shared files mode enabled, IPC is disabled 00:04:52.195 EAL: No shared files mode enabled, IPC is disabled 00:04:52.195 00:04:52.195 real 0m5.176s 00:04:52.195 user 0m4.207s 00:04:52.195 sys 0m0.815s 00:04:52.195 15:47:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.195 15:47:03 -- common/autotest_common.sh@10 -- # set +x 00:04:52.195 ************************************ 00:04:52.195 END TEST env_vtophys 00:04:52.195 ************************************ 00:04:52.195 15:47:03 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:52.195 15:47:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.195 15:47:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.195 15:47:03 -- common/autotest_common.sh@10 -- # set +x 00:04:52.195 ************************************ 00:04:52.195 START TEST env_pci 00:04:52.195 ************************************ 00:04:52.195 15:47:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:52.195 00:04:52.195 00:04:52.195 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.195 http://cunit.sourceforge.net/ 00:04:52.195 00:04:52.195 00:04:52.195 Suite: pci 00:04:52.195 Test: pci_hook ...[2024-11-29 15:47:03.481770] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56028 has claimed it 00:04:52.195 passed 00:04:52.195 00:04:52.195 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.195 suites 1 1 n/a 0 0 00:04:52.195 tests 1 1 1 0 0 00:04:52.195 asserts 25 25 25 0 n/a 00:04:52.195 00:04:52.195 Elapsed time = 0.007 seconds 00:04:52.195 EAL: Cannot find device (10000:00:01.0) 00:04:52.195 EAL: Failed to attach device on primary process 00:04:52.195 00:04:52.195 real 0m0.063s 00:04:52.195 user 0m0.023s 00:04:52.195 sys 0m0.040s 00:04:52.195 15:47:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.195 ************************************ 00:04:52.195 END TEST env_pci 00:04:52.195 ************************************ 00:04:52.195 15:47:03 -- common/autotest_common.sh@10 -- # set +x 00:04:52.195 15:47:03 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:52.195 15:47:03 -- env/env.sh@15 -- # uname 00:04:52.195 15:47:03 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:52.195 15:47:03 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:04:52.195 15:47:03 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:52.195 15:47:03 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:52.195 15:47:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.195 15:47:03 -- common/autotest_common.sh@10 -- # set +x 00:04:52.195 ************************************ 00:04:52.195 START TEST env_dpdk_post_init 00:04:52.195 ************************************ 00:04:52.195 15:47:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:52.195 EAL: Detected CPU lcores: 10 00:04:52.195 EAL: Detected NUMA nodes: 1 00:04:52.195 EAL: Detected shared linkage of DPDK 00:04:52.457 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:52.457 EAL: Selected IOVA mode 'PA' 00:04:52.457 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:52.457 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:04:52.457 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:04:52.457 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:08.0 (socket -1) 00:04:52.457 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:09.0 (socket -1) 00:04:52.457 Starting DPDK initialization... 00:04:52.457 Starting SPDK post initialization... 00:04:52.457 SPDK NVMe probe 00:04:52.457 Attaching to 0000:00:06.0 00:04:52.457 Attaching to 0000:00:07.0 00:04:52.457 Attaching to 0000:00:08.0 00:04:52.457 Attaching to 0000:00:09.0 00:04:52.457 Attached to 0000:00:06.0 00:04:52.457 Attached to 0000:00:07.0 00:04:52.457 Attached to 0000:00:09.0 00:04:52.457 Attached to 0000:00:08.0 00:04:52.457 Cleaning up... 00:04:52.457 00:04:52.457 real 0m0.223s 00:04:52.457 user 0m0.058s 00:04:52.457 sys 0m0.067s 00:04:52.457 15:47:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.457 ************************************ 00:04:52.457 END TEST env_dpdk_post_init 00:04:52.457 ************************************ 00:04:52.457 15:47:03 -- common/autotest_common.sh@10 -- # set +x 00:04:52.457 15:47:03 -- env/env.sh@26 -- # uname 00:04:52.457 15:47:03 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:52.457 15:47:03 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:52.457 15:47:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.457 15:47:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.457 15:47:03 -- common/autotest_common.sh@10 -- # set +x 00:04:52.457 ************************************ 00:04:52.457 START TEST env_mem_callbacks 00:04:52.457 ************************************ 00:04:52.457 15:47:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:52.718 EAL: Detected CPU lcores: 10 00:04:52.718 EAL: Detected NUMA nodes: 1 00:04:52.718 EAL: Detected shared linkage of DPDK 00:04:52.718 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:52.718 EAL: Selected IOVA mode 'PA' 00:04:52.718 00:04:52.718 00:04:52.718 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.718 http://cunit.sourceforge.net/ 00:04:52.718 00:04:52.718 00:04:52.718 Suite: memory 00:04:52.718 Test: test ... 
00:04:52.718 register 0x200000200000 2097152 00:04:52.718 malloc 3145728 00:04:52.718 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:52.718 register 0x200000400000 4194304 00:04:52.718 buf 0x2000004fffc0 len 3145728 PASSED 00:04:52.718 malloc 64 00:04:52.718 buf 0x2000004ffec0 len 64 PASSED 00:04:52.718 malloc 4194304 00:04:52.718 register 0x200000800000 6291456 00:04:52.718 buf 0x2000009fffc0 len 4194304 PASSED 00:04:52.718 free 0x2000004fffc0 3145728 00:04:52.718 free 0x2000004ffec0 64 00:04:52.718 unregister 0x200000400000 4194304 PASSED 00:04:52.718 free 0x2000009fffc0 4194304 00:04:52.718 unregister 0x200000800000 6291456 PASSED 00:04:52.718 malloc 8388608 00:04:52.718 register 0x200000400000 10485760 00:04:52.718 buf 0x2000005fffc0 len 8388608 PASSED 00:04:52.718 free 0x2000005fffc0 8388608 00:04:52.718 unregister 0x200000400000 10485760 PASSED 00:04:52.718 passed 00:04:52.718 00:04:52.718 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.718 suites 1 1 n/a 0 0 00:04:52.718 tests 1 1 1 0 0 00:04:52.718 asserts 15 15 15 0 n/a 00:04:52.718 00:04:52.718 Elapsed time = 0.038 seconds 00:04:52.718 00:04:52.718 real 0m0.211s 00:04:52.718 user 0m0.060s 00:04:52.718 sys 0m0.049s 00:04:52.718 15:47:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.718 ************************************ 00:04:52.718 END TEST env_mem_callbacks 00:04:52.718 ************************************ 00:04:52.718 15:47:04 -- common/autotest_common.sh@10 -- # set +x 00:04:52.718 00:04:52.718 real 0m6.341s 00:04:52.718 user 0m4.757s 00:04:52.718 sys 0m1.178s 00:04:52.718 15:47:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.718 15:47:04 -- common/autotest_common.sh@10 -- # set +x 00:04:52.718 ************************************ 00:04:52.718 END TEST env 00:04:52.718 ************************************ 00:04:52.718 15:47:04 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:52.718 15:47:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.718 15:47:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.718 15:47:04 -- common/autotest_common.sh@10 -- # set +x 00:04:52.978 ************************************ 00:04:52.978 START TEST rpc 00:04:52.978 ************************************ 00:04:52.978 15:47:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:52.978 * Looking for test storage... 
00:04:52.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:52.978 15:47:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:52.978 15:47:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:52.978 15:47:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:52.978 15:47:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:52.978 15:47:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:52.978 15:47:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:52.978 15:47:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:52.978 15:47:04 -- scripts/common.sh@335 -- # IFS=.-: 00:04:52.978 15:47:04 -- scripts/common.sh@335 -- # read -ra ver1 00:04:52.978 15:47:04 -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.978 15:47:04 -- scripts/common.sh@336 -- # read -ra ver2 00:04:52.978 15:47:04 -- scripts/common.sh@337 -- # local 'op=<' 00:04:52.978 15:47:04 -- scripts/common.sh@339 -- # ver1_l=2 00:04:52.978 15:47:04 -- scripts/common.sh@340 -- # ver2_l=1 00:04:52.978 15:47:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:52.978 15:47:04 -- scripts/common.sh@343 -- # case "$op" in 00:04:52.978 15:47:04 -- scripts/common.sh@344 -- # : 1 00:04:52.978 15:47:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:52.978 15:47:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.978 15:47:04 -- scripts/common.sh@364 -- # decimal 1 00:04:52.978 15:47:04 -- scripts/common.sh@352 -- # local d=1 00:04:52.978 15:47:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.978 15:47:04 -- scripts/common.sh@354 -- # echo 1 00:04:52.978 15:47:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:52.978 15:47:04 -- scripts/common.sh@365 -- # decimal 2 00:04:52.978 15:47:04 -- scripts/common.sh@352 -- # local d=2 00:04:52.978 15:47:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.978 15:47:04 -- scripts/common.sh@354 -- # echo 2 00:04:52.978 15:47:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:52.978 15:47:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:52.978 15:47:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:52.978 15:47:04 -- scripts/common.sh@367 -- # return 0 00:04:52.978 15:47:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.978 15:47:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:52.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.978 --rc genhtml_branch_coverage=1 00:04:52.978 --rc genhtml_function_coverage=1 00:04:52.978 --rc genhtml_legend=1 00:04:52.978 --rc geninfo_all_blocks=1 00:04:52.978 --rc geninfo_unexecuted_blocks=1 00:04:52.978 00:04:52.978 ' 00:04:52.978 15:47:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:52.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.978 --rc genhtml_branch_coverage=1 00:04:52.978 --rc genhtml_function_coverage=1 00:04:52.978 --rc genhtml_legend=1 00:04:52.978 --rc geninfo_all_blocks=1 00:04:52.978 --rc geninfo_unexecuted_blocks=1 00:04:52.978 00:04:52.978 ' 00:04:52.978 15:47:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:52.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.978 --rc genhtml_branch_coverage=1 00:04:52.978 --rc genhtml_function_coverage=1 00:04:52.978 --rc genhtml_legend=1 00:04:52.978 --rc geninfo_all_blocks=1 00:04:52.978 --rc geninfo_unexecuted_blocks=1 00:04:52.978 00:04:52.978 ' 00:04:52.978 15:47:04 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:52.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.978 --rc genhtml_branch_coverage=1 00:04:52.978 --rc genhtml_function_coverage=1 00:04:52.978 --rc genhtml_legend=1 00:04:52.978 --rc geninfo_all_blocks=1 00:04:52.978 --rc geninfo_unexecuted_blocks=1 00:04:52.978 00:04:52.978 ' 00:04:52.978 15:47:04 -- rpc/rpc.sh@65 -- # spdk_pid=56149 00:04:52.978 15:47:04 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.978 15:47:04 -- rpc/rpc.sh@67 -- # waitforlisten 56149 00:04:52.978 15:47:04 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:52.978 15:47:04 -- common/autotest_common.sh@829 -- # '[' -z 56149 ']' 00:04:52.978 15:47:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.978 15:47:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:52.978 15:47:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.978 15:47:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:52.978 15:47:04 -- common/autotest_common.sh@10 -- # set +x 00:04:52.978 [2024-11-29 15:47:04.364275] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:52.978 [2024-11-29 15:47:04.364403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56149 ] 00:04:53.237 [2024-11-29 15:47:04.512127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.237 [2024-11-29 15:47:04.663010] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:53.237 [2024-11-29 15:47:04.663160] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:53.237 [2024-11-29 15:47:04.663171] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56149' to capture a snapshot of events at runtime. 00:04:53.237 [2024-11-29 15:47:04.663178] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56149 for offline analysis/debug. 
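Note: the two app_setup_trace notices above are directly actionable while spdk_tgt (pid 56149, valid for this run only) is still up. Assuming spdk_trace was built alongside spdk_tgt under build/bin, either of the paths the notices describe works:

    build/bin/spdk_trace -s spdk_tgt -p 56149             # live snapshot, quoted verbatim from the first notice
    cp /dev/shm/spdk_tgt_trace.pid56149 ~/tgt-trace.bin   # keep the shm file for offline analysis, per the second notice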
00:04:53.237 [2024-11-29 15:47:04.663201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.803 15:47:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:53.803 15:47:05 -- common/autotest_common.sh@862 -- # return 0 00:04:53.803 15:47:05 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:53.803 15:47:05 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:53.803 15:47:05 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:53.803 15:47:05 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:53.803 15:47:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.803 15:47:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.803 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:53.803 ************************************ 00:04:53.803 START TEST rpc_integrity 00:04:53.803 ************************************ 00:04:53.803 15:47:05 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:53.803 15:47:05 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:53.803 15:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.803 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:53.803 15:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.803 15:47:05 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:53.803 15:47:05 -- rpc/rpc.sh@13 -- # jq length 00:04:53.803 15:47:05 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:53.803 15:47:05 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:53.803 15:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.803 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:54.061 15:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.061 15:47:05 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:54.061 15:47:05 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:54.061 15:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.061 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:54.061 15:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.061 15:47:05 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:54.061 { 00:04:54.061 "name": "Malloc0", 00:04:54.061 "aliases": [ 00:04:54.061 "1fa4cb54-c16c-45ce-88ec-f4031c0386a7" 00:04:54.061 ], 00:04:54.061 "product_name": "Malloc disk", 00:04:54.061 "block_size": 512, 00:04:54.062 "num_blocks": 16384, 00:04:54.062 "uuid": "1fa4cb54-c16c-45ce-88ec-f4031c0386a7", 00:04:54.062 "assigned_rate_limits": { 00:04:54.062 "rw_ios_per_sec": 0, 00:04:54.062 "rw_mbytes_per_sec": 0, 00:04:54.062 "r_mbytes_per_sec": 0, 00:04:54.062 "w_mbytes_per_sec": 0 00:04:54.062 }, 00:04:54.062 "claimed": false, 00:04:54.062 "zoned": false, 00:04:54.062 "supported_io_types": { 00:04:54.062 "read": true, 00:04:54.062 "write": true, 00:04:54.062 "unmap": true, 00:04:54.062 "write_zeroes": true, 00:04:54.062 "flush": true, 00:04:54.062 "reset": true, 00:04:54.062 "compare": false, 00:04:54.062 "compare_and_write": false, 00:04:54.062 "abort": true, 00:04:54.062 "nvme_admin": false, 00:04:54.062 "nvme_io": false 00:04:54.062 }, 00:04:54.062 "memory_domains": [ 00:04:54.062 { 00:04:54.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.062 
"dma_device_type": 2 00:04:54.062 } 00:04:54.062 ], 00:04:54.062 "driver_specific": {} 00:04:54.062 } 00:04:54.062 ]' 00:04:54.062 15:47:05 -- rpc/rpc.sh@17 -- # jq length 00:04:54.062 15:47:05 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:54.062 15:47:05 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:54.062 15:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.062 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:54.062 [2024-11-29 15:47:05.289909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:54.062 [2024-11-29 15:47:05.289963] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:54.062 [2024-11-29 15:47:05.289991] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:04:54.062 [2024-11-29 15:47:05.290002] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:54.062 [2024-11-29 15:47:05.291731] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:54.062 [2024-11-29 15:47:05.291767] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:54.062 Passthru0 00:04:54.062 15:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.062 15:47:05 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:54.062 15:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.062 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:54.062 15:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.062 15:47:05 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:54.062 { 00:04:54.062 "name": "Malloc0", 00:04:54.062 "aliases": [ 00:04:54.062 "1fa4cb54-c16c-45ce-88ec-f4031c0386a7" 00:04:54.062 ], 00:04:54.062 "product_name": "Malloc disk", 00:04:54.062 "block_size": 512, 00:04:54.062 "num_blocks": 16384, 00:04:54.062 "uuid": "1fa4cb54-c16c-45ce-88ec-f4031c0386a7", 00:04:54.062 "assigned_rate_limits": { 00:04:54.062 "rw_ios_per_sec": 0, 00:04:54.062 "rw_mbytes_per_sec": 0, 00:04:54.062 "r_mbytes_per_sec": 0, 00:04:54.062 "w_mbytes_per_sec": 0 00:04:54.062 }, 00:04:54.062 "claimed": true, 00:04:54.062 "claim_type": "exclusive_write", 00:04:54.062 "zoned": false, 00:04:54.062 "supported_io_types": { 00:04:54.062 "read": true, 00:04:54.062 "write": true, 00:04:54.062 "unmap": true, 00:04:54.062 "write_zeroes": true, 00:04:54.062 "flush": true, 00:04:54.062 "reset": true, 00:04:54.062 "compare": false, 00:04:54.062 "compare_and_write": false, 00:04:54.062 "abort": true, 00:04:54.062 "nvme_admin": false, 00:04:54.062 "nvme_io": false 00:04:54.062 }, 00:04:54.062 "memory_domains": [ 00:04:54.062 { 00:04:54.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.062 "dma_device_type": 2 00:04:54.062 } 00:04:54.062 ], 00:04:54.062 "driver_specific": {} 00:04:54.062 }, 00:04:54.062 { 00:04:54.062 "name": "Passthru0", 00:04:54.062 "aliases": [ 00:04:54.062 "b88247d4-cafd-5a1f-94f3-2fe107d886cf" 00:04:54.062 ], 00:04:54.062 "product_name": "passthru", 00:04:54.062 "block_size": 512, 00:04:54.062 "num_blocks": 16384, 00:04:54.062 "uuid": "b88247d4-cafd-5a1f-94f3-2fe107d886cf", 00:04:54.062 "assigned_rate_limits": { 00:04:54.062 "rw_ios_per_sec": 0, 00:04:54.062 "rw_mbytes_per_sec": 0, 00:04:54.062 "r_mbytes_per_sec": 0, 00:04:54.062 "w_mbytes_per_sec": 0 00:04:54.062 }, 00:04:54.062 "claimed": false, 00:04:54.062 "zoned": false, 00:04:54.062 "supported_io_types": { 00:04:54.062 "read": true, 00:04:54.062 "write": true, 00:04:54.062 "unmap": true, 00:04:54.062 
"write_zeroes": true, 00:04:54.062 "flush": true, 00:04:54.062 "reset": true, 00:04:54.062 "compare": false, 00:04:54.062 "compare_and_write": false, 00:04:54.062 "abort": true, 00:04:54.062 "nvme_admin": false, 00:04:54.062 "nvme_io": false 00:04:54.062 }, 00:04:54.062 "memory_domains": [ 00:04:54.062 { 00:04:54.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.062 "dma_device_type": 2 00:04:54.062 } 00:04:54.062 ], 00:04:54.062 "driver_specific": { 00:04:54.062 "passthru": { 00:04:54.062 "name": "Passthru0", 00:04:54.062 "base_bdev_name": "Malloc0" 00:04:54.062 } 00:04:54.062 } 00:04:54.062 } 00:04:54.062 ]' 00:04:54.062 15:47:05 -- rpc/rpc.sh@21 -- # jq length 00:04:54.062 15:47:05 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:54.062 15:47:05 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:54.062 15:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.062 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:54.062 15:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.062 15:47:05 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:54.062 15:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.062 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:54.062 15:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.062 15:47:05 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:54.062 15:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.062 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:54.062 15:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.062 15:47:05 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:54.062 15:47:05 -- rpc/rpc.sh@26 -- # jq length 00:04:54.062 15:47:05 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:54.062 00:04:54.062 real 0m0.226s 00:04:54.062 user 0m0.127s 00:04:54.062 sys 0m0.026s 00:04:54.062 15:47:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:54.062 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:54.062 ************************************ 00:04:54.062 END TEST rpc_integrity 00:04:54.062 ************************************ 00:04:54.062 15:47:05 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:54.062 15:47:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.062 15:47:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.062 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:54.062 ************************************ 00:04:54.062 START TEST rpc_plugins 00:04:54.062 ************************************ 00:04:54.062 15:47:05 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:04:54.062 15:47:05 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:54.062 15:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.062 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:54.062 15:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.062 15:47:05 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:54.062 15:47:05 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:54.062 15:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.062 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:54.062 15:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.062 15:47:05 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:54.062 { 00:04:54.062 "name": "Malloc1", 00:04:54.062 "aliases": [ 00:04:54.062 "534e3924-fe92-4197-bb04-1c904c211604" 00:04:54.062 ], 00:04:54.062 "product_name": "Malloc disk", 00:04:54.062 
"block_size": 4096, 00:04:54.062 "num_blocks": 256, 00:04:54.062 "uuid": "534e3924-fe92-4197-bb04-1c904c211604", 00:04:54.062 "assigned_rate_limits": { 00:04:54.062 "rw_ios_per_sec": 0, 00:04:54.062 "rw_mbytes_per_sec": 0, 00:04:54.062 "r_mbytes_per_sec": 0, 00:04:54.062 "w_mbytes_per_sec": 0 00:04:54.062 }, 00:04:54.062 "claimed": false, 00:04:54.062 "zoned": false, 00:04:54.062 "supported_io_types": { 00:04:54.062 "read": true, 00:04:54.062 "write": true, 00:04:54.062 "unmap": true, 00:04:54.062 "write_zeroes": true, 00:04:54.062 "flush": true, 00:04:54.062 "reset": true, 00:04:54.062 "compare": false, 00:04:54.062 "compare_and_write": false, 00:04:54.062 "abort": true, 00:04:54.062 "nvme_admin": false, 00:04:54.062 "nvme_io": false 00:04:54.062 }, 00:04:54.062 "memory_domains": [ 00:04:54.062 { 00:04:54.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.062 "dma_device_type": 2 00:04:54.062 } 00:04:54.062 ], 00:04:54.062 "driver_specific": {} 00:04:54.062 } 00:04:54.062 ]' 00:04:54.062 15:47:05 -- rpc/rpc.sh@32 -- # jq length 00:04:54.321 15:47:05 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:54.321 15:47:05 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:54.321 15:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.321 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:54.321 15:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.321 15:47:05 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:54.321 15:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.321 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:54.321 15:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.321 15:47:05 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:54.321 15:47:05 -- rpc/rpc.sh@36 -- # jq length 00:04:54.321 15:47:05 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:54.321 00:04:54.321 real 0m0.110s 00:04:54.321 user 0m0.061s 00:04:54.321 sys 0m0.017s 00:04:54.321 15:47:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:54.321 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:54.321 ************************************ 00:04:54.321 END TEST rpc_plugins 00:04:54.321 ************************************ 00:04:54.321 15:47:05 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:54.321 15:47:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.321 15:47:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.321 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:54.321 ************************************ 00:04:54.321 START TEST rpc_trace_cmd_test 00:04:54.321 ************************************ 00:04:54.321 15:47:05 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:04:54.321 15:47:05 -- rpc/rpc.sh@40 -- # local info 00:04:54.321 15:47:05 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:54.321 15:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.321 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:54.321 15:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.321 15:47:05 -- rpc/rpc.sh@42 -- # info='{ 00:04:54.321 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56149", 00:04:54.321 "tpoint_group_mask": "0x8", 00:04:54.321 "iscsi_conn": { 00:04:54.321 "mask": "0x2", 00:04:54.321 "tpoint_mask": "0x0" 00:04:54.321 }, 00:04:54.321 "scsi": { 00:04:54.321 "mask": "0x4", 00:04:54.321 "tpoint_mask": "0x0" 00:04:54.321 }, 00:04:54.321 "bdev": { 00:04:54.321 "mask": "0x8", 00:04:54.321 "tpoint_mask": 
"0xffffffffffffffff" 00:04:54.321 }, 00:04:54.321 "nvmf_rdma": { 00:04:54.321 "mask": "0x10", 00:04:54.321 "tpoint_mask": "0x0" 00:04:54.321 }, 00:04:54.321 "nvmf_tcp": { 00:04:54.321 "mask": "0x20", 00:04:54.321 "tpoint_mask": "0x0" 00:04:54.321 }, 00:04:54.321 "ftl": { 00:04:54.321 "mask": "0x40", 00:04:54.321 "tpoint_mask": "0x0" 00:04:54.321 }, 00:04:54.321 "blobfs": { 00:04:54.321 "mask": "0x80", 00:04:54.321 "tpoint_mask": "0x0" 00:04:54.321 }, 00:04:54.321 "dsa": { 00:04:54.321 "mask": "0x200", 00:04:54.321 "tpoint_mask": "0x0" 00:04:54.321 }, 00:04:54.321 "thread": { 00:04:54.321 "mask": "0x400", 00:04:54.321 "tpoint_mask": "0x0" 00:04:54.321 }, 00:04:54.321 "nvme_pcie": { 00:04:54.321 "mask": "0x800", 00:04:54.321 "tpoint_mask": "0x0" 00:04:54.321 }, 00:04:54.321 "iaa": { 00:04:54.321 "mask": "0x1000", 00:04:54.321 "tpoint_mask": "0x0" 00:04:54.321 }, 00:04:54.321 "nvme_tcp": { 00:04:54.321 "mask": "0x2000", 00:04:54.321 "tpoint_mask": "0x0" 00:04:54.321 }, 00:04:54.321 "bdev_nvme": { 00:04:54.321 "mask": "0x4000", 00:04:54.321 "tpoint_mask": "0x0" 00:04:54.321 } 00:04:54.321 }' 00:04:54.321 15:47:05 -- rpc/rpc.sh@43 -- # jq length 00:04:54.322 15:47:05 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:54.322 15:47:05 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:54.322 15:47:05 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:54.322 15:47:05 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:54.322 15:47:05 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:54.322 15:47:05 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:54.322 15:47:05 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:54.322 15:47:05 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:54.581 15:47:05 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:54.581 00:04:54.581 real 0m0.164s 00:04:54.581 user 0m0.132s 00:04:54.581 sys 0m0.026s 00:04:54.581 15:47:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:54.581 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:54.581 ************************************ 00:04:54.581 END TEST rpc_trace_cmd_test 00:04:54.581 ************************************ 00:04:54.581 15:47:05 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:54.581 15:47:05 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:54.581 15:47:05 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:54.581 15:47:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.581 15:47:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.581 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:54.581 ************************************ 00:04:54.581 START TEST rpc_daemon_integrity 00:04:54.581 ************************************ 00:04:54.581 15:47:05 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:54.581 15:47:05 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:54.581 15:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.581 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:54.581 15:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.581 15:47:05 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:54.581 15:47:05 -- rpc/rpc.sh@13 -- # jq length 00:04:54.581 15:47:05 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:54.581 15:47:05 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:54.581 15:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.581 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:54.581 15:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.581 15:47:05 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:54.581 15:47:05 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:54.581 15:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.581 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:54.581 15:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.581 15:47:05 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:54.581 { 00:04:54.581 "name": "Malloc2", 00:04:54.581 "aliases": [ 00:04:54.581 "2989ec0f-3727-4adb-8d46-1a8a89a5aa88" 00:04:54.581 ], 00:04:54.581 "product_name": "Malloc disk", 00:04:54.581 "block_size": 512, 00:04:54.581 "num_blocks": 16384, 00:04:54.581 "uuid": "2989ec0f-3727-4adb-8d46-1a8a89a5aa88", 00:04:54.581 "assigned_rate_limits": { 00:04:54.581 "rw_ios_per_sec": 0, 00:04:54.581 "rw_mbytes_per_sec": 0, 00:04:54.581 "r_mbytes_per_sec": 0, 00:04:54.581 "w_mbytes_per_sec": 0 00:04:54.581 }, 00:04:54.581 "claimed": false, 00:04:54.581 "zoned": false, 00:04:54.581 "supported_io_types": { 00:04:54.581 "read": true, 00:04:54.581 "write": true, 00:04:54.581 "unmap": true, 00:04:54.581 "write_zeroes": true, 00:04:54.581 "flush": true, 00:04:54.581 "reset": true, 00:04:54.581 "compare": false, 00:04:54.581 "compare_and_write": false, 00:04:54.581 "abort": true, 00:04:54.581 "nvme_admin": false, 00:04:54.581 "nvme_io": false 00:04:54.581 }, 00:04:54.581 "memory_domains": [ 00:04:54.581 { 00:04:54.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.581 "dma_device_type": 2 00:04:54.581 } 00:04:54.581 ], 00:04:54.581 "driver_specific": {} 00:04:54.581 } 00:04:54.581 ]' 00:04:54.581 15:47:05 -- rpc/rpc.sh@17 -- # jq length 00:04:54.581 15:47:05 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:54.581 15:47:05 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:54.581 15:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.581 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:54.581 [2024-11-29 15:47:05.926481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:54.581 [2024-11-29 15:47:05.926528] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:54.581 [2024-11-29 15:47:05.926543] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:04:54.581 [2024-11-29 15:47:05.926551] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:54.581 [2024-11-29 15:47:05.928184] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:54.581 [2024-11-29 15:47:05.928215] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:54.581 Passthru0 00:04:54.581 15:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.581 15:47:05 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:54.581 15:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.581 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:54.581 15:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.581 15:47:05 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:54.581 { 00:04:54.581 "name": "Malloc2", 00:04:54.581 "aliases": [ 00:04:54.581 "2989ec0f-3727-4adb-8d46-1a8a89a5aa88" 00:04:54.581 ], 00:04:54.581 "product_name": "Malloc disk", 00:04:54.581 "block_size": 512, 00:04:54.581 "num_blocks": 16384, 00:04:54.581 "uuid": "2989ec0f-3727-4adb-8d46-1a8a89a5aa88", 00:04:54.581 "assigned_rate_limits": { 00:04:54.581 "rw_ios_per_sec": 0, 00:04:54.581 "rw_mbytes_per_sec": 0, 00:04:54.581 "r_mbytes_per_sec": 0, 00:04:54.581 
"w_mbytes_per_sec": 0 00:04:54.581 }, 00:04:54.581 "claimed": true, 00:04:54.581 "claim_type": "exclusive_write", 00:04:54.581 "zoned": false, 00:04:54.581 "supported_io_types": { 00:04:54.581 "read": true, 00:04:54.581 "write": true, 00:04:54.581 "unmap": true, 00:04:54.581 "write_zeroes": true, 00:04:54.581 "flush": true, 00:04:54.581 "reset": true, 00:04:54.581 "compare": false, 00:04:54.581 "compare_and_write": false, 00:04:54.581 "abort": true, 00:04:54.581 "nvme_admin": false, 00:04:54.581 "nvme_io": false 00:04:54.581 }, 00:04:54.581 "memory_domains": [ 00:04:54.581 { 00:04:54.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.581 "dma_device_type": 2 00:04:54.581 } 00:04:54.581 ], 00:04:54.581 "driver_specific": {} 00:04:54.581 }, 00:04:54.581 { 00:04:54.581 "name": "Passthru0", 00:04:54.581 "aliases": [ 00:04:54.581 "26ca9746-1086-5a06-be91-1e5ccc344984" 00:04:54.581 ], 00:04:54.581 "product_name": "passthru", 00:04:54.581 "block_size": 512, 00:04:54.581 "num_blocks": 16384, 00:04:54.581 "uuid": "26ca9746-1086-5a06-be91-1e5ccc344984", 00:04:54.581 "assigned_rate_limits": { 00:04:54.581 "rw_ios_per_sec": 0, 00:04:54.581 "rw_mbytes_per_sec": 0, 00:04:54.581 "r_mbytes_per_sec": 0, 00:04:54.581 "w_mbytes_per_sec": 0 00:04:54.581 }, 00:04:54.581 "claimed": false, 00:04:54.581 "zoned": false, 00:04:54.581 "supported_io_types": { 00:04:54.581 "read": true, 00:04:54.581 "write": true, 00:04:54.581 "unmap": true, 00:04:54.581 "write_zeroes": true, 00:04:54.581 "flush": true, 00:04:54.581 "reset": true, 00:04:54.582 "compare": false, 00:04:54.582 "compare_and_write": false, 00:04:54.582 "abort": true, 00:04:54.582 "nvme_admin": false, 00:04:54.582 "nvme_io": false 00:04:54.582 }, 00:04:54.582 "memory_domains": [ 00:04:54.582 { 00:04:54.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.582 "dma_device_type": 2 00:04:54.582 } 00:04:54.582 ], 00:04:54.582 "driver_specific": { 00:04:54.582 "passthru": { 00:04:54.582 "name": "Passthru0", 00:04:54.582 "base_bdev_name": "Malloc2" 00:04:54.582 } 00:04:54.582 } 00:04:54.582 } 00:04:54.582 ]' 00:04:54.582 15:47:05 -- rpc/rpc.sh@21 -- # jq length 00:04:54.582 15:47:05 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:54.582 15:47:05 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:54.582 15:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.582 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:54.582 15:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.582 15:47:05 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:54.582 15:47:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.582 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:54.582 15:47:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.582 15:47:06 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:54.582 15:47:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.582 15:47:06 -- common/autotest_common.sh@10 -- # set +x 00:04:54.841 15:47:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.841 15:47:06 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:54.841 15:47:06 -- rpc/rpc.sh@26 -- # jq length 00:04:54.841 15:47:06 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:54.841 00:04:54.841 real 0m0.233s 00:04:54.841 user 0m0.121s 00:04:54.841 sys 0m0.035s 00:04:54.841 15:47:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:54.841 15:47:06 -- common/autotest_common.sh@10 -- # set +x 00:04:54.841 ************************************ 00:04:54.841 END TEST 
rpc_daemon_integrity 00:04:54.841 ************************************ 00:04:54.841 15:47:06 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:54.841 15:47:06 -- rpc/rpc.sh@84 -- # killprocess 56149 00:04:54.841 15:47:06 -- common/autotest_common.sh@936 -- # '[' -z 56149 ']' 00:04:54.841 15:47:06 -- common/autotest_common.sh@940 -- # kill -0 56149 00:04:54.841 15:47:06 -- common/autotest_common.sh@941 -- # uname 00:04:54.841 15:47:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:54.841 15:47:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56149 00:04:54.841 15:47:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:54.841 15:47:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:54.841 15:47:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56149' 00:04:54.841 killing process with pid 56149 00:04:54.841 15:47:06 -- common/autotest_common.sh@955 -- # kill 56149 00:04:54.841 15:47:06 -- common/autotest_common.sh@960 -- # wait 56149 00:04:56.216 00:04:56.216 real 0m3.110s 00:04:56.216 user 0m3.503s 00:04:56.216 sys 0m0.584s 00:04:56.216 15:47:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:56.216 15:47:07 -- common/autotest_common.sh@10 -- # set +x 00:04:56.216 ************************************ 00:04:56.216 END TEST rpc 00:04:56.216 ************************************ 00:04:56.216 15:47:07 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:56.216 15:47:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.216 15:47:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.216 15:47:07 -- common/autotest_common.sh@10 -- # set +x 00:04:56.216 ************************************ 00:04:56.216 START TEST rpc_client 00:04:56.216 ************************************ 00:04:56.216 15:47:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:56.216 * Looking for test storage... 00:04:56.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:56.216 15:47:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:56.216 15:47:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:56.216 15:47:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:56.216 15:47:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:56.216 15:47:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:56.217 15:47:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:56.217 15:47:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:56.217 15:47:07 -- scripts/common.sh@335 -- # IFS=.-: 00:04:56.217 15:47:07 -- scripts/common.sh@335 -- # read -ra ver1 00:04:56.217 15:47:07 -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.217 15:47:07 -- scripts/common.sh@336 -- # read -ra ver2 00:04:56.217 15:47:07 -- scripts/common.sh@337 -- # local 'op=<' 00:04:56.217 15:47:07 -- scripts/common.sh@339 -- # ver1_l=2 00:04:56.217 15:47:07 -- scripts/common.sh@340 -- # ver2_l=1 00:04:56.217 15:47:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:56.217 15:47:07 -- scripts/common.sh@343 -- # case "$op" in 00:04:56.217 15:47:07 -- scripts/common.sh@344 -- # : 1 00:04:56.217 15:47:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:56.217 15:47:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:56.217 15:47:07 -- scripts/common.sh@364 -- # decimal 1 00:04:56.217 15:47:07 -- scripts/common.sh@352 -- # local d=1 00:04:56.217 15:47:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.217 15:47:07 -- scripts/common.sh@354 -- # echo 1 00:04:56.217 15:47:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:56.217 15:47:07 -- scripts/common.sh@365 -- # decimal 2 00:04:56.217 15:47:07 -- scripts/common.sh@352 -- # local d=2 00:04:56.217 15:47:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.217 15:47:07 -- scripts/common.sh@354 -- # echo 2 00:04:56.217 15:47:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:56.217 15:47:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:56.217 15:47:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:56.217 15:47:07 -- scripts/common.sh@367 -- # return 0 00:04:56.217 15:47:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.217 15:47:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:56.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.217 --rc genhtml_branch_coverage=1 00:04:56.217 --rc genhtml_function_coverage=1 00:04:56.217 --rc genhtml_legend=1 00:04:56.217 --rc geninfo_all_blocks=1 00:04:56.217 --rc geninfo_unexecuted_blocks=1 00:04:56.217 00:04:56.217 ' 00:04:56.217 15:47:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:56.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.217 --rc genhtml_branch_coverage=1 00:04:56.217 --rc genhtml_function_coverage=1 00:04:56.217 --rc genhtml_legend=1 00:04:56.217 --rc geninfo_all_blocks=1 00:04:56.217 --rc geninfo_unexecuted_blocks=1 00:04:56.217 00:04:56.217 ' 00:04:56.217 15:47:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:56.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.217 --rc genhtml_branch_coverage=1 00:04:56.217 --rc genhtml_function_coverage=1 00:04:56.217 --rc genhtml_legend=1 00:04:56.217 --rc geninfo_all_blocks=1 00:04:56.217 --rc geninfo_unexecuted_blocks=1 00:04:56.217 00:04:56.217 ' 00:04:56.217 15:47:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:56.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.217 --rc genhtml_branch_coverage=1 00:04:56.217 --rc genhtml_function_coverage=1 00:04:56.217 --rc genhtml_legend=1 00:04:56.217 --rc geninfo_all_blocks=1 00:04:56.217 --rc geninfo_unexecuted_blocks=1 00:04:56.217 00:04:56.217 ' 00:04:56.217 15:47:07 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:56.217 OK 00:04:56.217 15:47:07 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:56.217 00:04:56.217 real 0m0.173s 00:04:56.217 user 0m0.101s 00:04:56.217 sys 0m0.082s 00:04:56.217 15:47:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:56.217 15:47:07 -- common/autotest_common.sh@10 -- # set +x 00:04:56.217 ************************************ 00:04:56.217 END TEST rpc_client 00:04:56.217 ************************************ 00:04:56.217 15:47:07 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:56.217 15:47:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.217 15:47:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.217 15:47:07 -- common/autotest_common.sh@10 -- # set +x 00:04:56.217 ************************************ 00:04:56.217 START TEST 
json_config 00:04:56.217 ************************************ 00:04:56.217 15:47:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:56.217 15:47:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:56.217 15:47:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:56.217 15:47:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:56.217 15:47:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:56.217 15:47:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:56.217 15:47:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:56.217 15:47:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:56.217 15:47:07 -- scripts/common.sh@335 -- # IFS=.-: 00:04:56.217 15:47:07 -- scripts/common.sh@335 -- # read -ra ver1 00:04:56.217 15:47:07 -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.217 15:47:07 -- scripts/common.sh@336 -- # read -ra ver2 00:04:56.217 15:47:07 -- scripts/common.sh@337 -- # local 'op=<' 00:04:56.217 15:47:07 -- scripts/common.sh@339 -- # ver1_l=2 00:04:56.217 15:47:07 -- scripts/common.sh@340 -- # ver2_l=1 00:04:56.217 15:47:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:56.217 15:47:07 -- scripts/common.sh@343 -- # case "$op" in 00:04:56.217 15:47:07 -- scripts/common.sh@344 -- # : 1 00:04:56.217 15:47:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:56.217 15:47:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.217 15:47:07 -- scripts/common.sh@364 -- # decimal 1 00:04:56.217 15:47:07 -- scripts/common.sh@352 -- # local d=1 00:04:56.217 15:47:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.217 15:47:07 -- scripts/common.sh@354 -- # echo 1 00:04:56.217 15:47:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:56.217 15:47:07 -- scripts/common.sh@365 -- # decimal 2 00:04:56.217 15:47:07 -- scripts/common.sh@352 -- # local d=2 00:04:56.217 15:47:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.217 15:47:07 -- scripts/common.sh@354 -- # echo 2 00:04:56.217 15:47:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:56.217 15:47:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:56.217 15:47:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:56.217 15:47:07 -- scripts/common.sh@367 -- # return 0 00:04:56.217 15:47:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.217 15:47:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:56.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.217 --rc genhtml_branch_coverage=1 00:04:56.217 --rc genhtml_function_coverage=1 00:04:56.217 --rc genhtml_legend=1 00:04:56.217 --rc geninfo_all_blocks=1 00:04:56.217 --rc geninfo_unexecuted_blocks=1 00:04:56.217 00:04:56.217 ' 00:04:56.217 15:47:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:56.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.217 --rc genhtml_branch_coverage=1 00:04:56.217 --rc genhtml_function_coverage=1 00:04:56.217 --rc genhtml_legend=1 00:04:56.217 --rc geninfo_all_blocks=1 00:04:56.217 --rc geninfo_unexecuted_blocks=1 00:04:56.217 00:04:56.217 ' 00:04:56.217 15:47:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:56.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.217 --rc genhtml_branch_coverage=1 00:04:56.217 --rc genhtml_function_coverage=1 00:04:56.217 --rc genhtml_legend=1 00:04:56.217 --rc 
geninfo_all_blocks=1 00:04:56.217 --rc geninfo_unexecuted_blocks=1 00:04:56.217 00:04:56.217 ' 00:04:56.217 15:47:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:56.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.217 --rc genhtml_branch_coverage=1 00:04:56.217 --rc genhtml_function_coverage=1 00:04:56.217 --rc genhtml_legend=1 00:04:56.217 --rc geninfo_all_blocks=1 00:04:56.217 --rc geninfo_unexecuted_blocks=1 00:04:56.217 00:04:56.217 ' 00:04:56.217 15:47:07 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:56.217 15:47:07 -- nvmf/common.sh@7 -- # uname -s 00:04:56.217 15:47:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:56.217 15:47:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:56.217 15:47:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:56.217 15:47:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:56.217 15:47:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:56.217 15:47:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:56.217 15:47:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:56.217 15:47:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:56.217 15:47:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:56.217 15:47:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:56.478 15:47:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4930376-339e-47d0-8578-dd6c8fd2c062 00:04:56.479 15:47:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=a4930376-339e-47d0-8578-dd6c8fd2c062 00:04:56.479 15:47:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:56.479 15:47:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:56.479 15:47:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:56.479 15:47:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:56.479 15:47:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:56.479 15:47:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.479 15:47:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.479 15:47:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.479 15:47:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.479 15:47:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.479 
15:47:07 -- paths/export.sh@5 -- # export PATH 00:04:56.479 15:47:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.479 15:47:07 -- nvmf/common.sh@46 -- # : 0 00:04:56.479 15:47:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:56.479 15:47:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:56.479 15:47:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:56.479 15:47:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:56.479 15:47:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:56.479 15:47:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:56.479 15:47:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:56.479 15:47:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:56.479 15:47:07 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:56.479 15:47:07 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:56.479 15:47:07 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:56.479 15:47:07 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:56.479 WARNING: No tests are enabled so not running JSON configuration tests 00:04:56.479 15:47:07 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:56.479 15:47:07 -- json_config/json_config.sh@27 -- # exit 0 00:04:56.479 00:04:56.479 real 0m0.137s 00:04:56.479 user 0m0.086s 00:04:56.479 sys 0m0.056s 00:04:56.479 15:47:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:56.479 15:47:07 -- common/autotest_common.sh@10 -- # set +x 00:04:56.479 ************************************ 00:04:56.479 END TEST json_config 00:04:56.479 ************************************
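A note on the cmp_versions trace that opens each suite above: autotest_common.sh only enables the newer lcov coverage flags when `lt 1.15 2` confirms that the installed lcov predates 2.x, and the trace shows how that answer is computed. Both version strings are split on `.`, `-` and `:` into arrays and compared field by field. A minimal sketch of that logic, reconstructed from the trace; the function and variable names follow what the trace prints, but treat this as a paraphrase of scripts/common.sh rather than the exact source:

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local ver1 ver1_l ver2 ver2_l
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local op=$2 lt=0 gt=0 v
        ver1_l=${#ver1[@]}
        ver2_l=${#ver2[@]}
        # walk the longer of the two arrays; unset fields evaluate to 0 in arithmetic context
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            (( ver1[v] > ver2[v] )) && { gt=1; break; }
            (( ver1[v] < ver2[v] )) && { lt=1; break; }
        done
        case "$op" in
            '<') (( lt == 1 )) ;;
            '>') (( gt == 1 )) ;;
        esac
    }

    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.x lcov detected"

With lcov 1.15 the loop stops at the first field (1 < 2), `lt` returns 0, and the suite exports the LCOV_OPTS block of --rc branch- and function-coverage options seen in the trace.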
00:04:56.479 15:47:07 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:56.479 15:47:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.479 15:47:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.479 15:47:07 -- common/autotest_common.sh@10 -- # set +x 00:04:56.479 ************************************ 00:04:56.479 START TEST json_config_extra_key 00:04:56.479 ************************************ 00:04:56.479 15:47:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:56.479 15:47:07 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:56.480 15:47:07 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:56.480 15:47:07 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:56.480 15:47:07 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:56.480 15:47:07 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:56.480 15:47:07 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:56.480 15:47:07 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:56.480 15:47:07 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:56.480 15:47:07 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:56.480 15:47:07 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:56.480 INFO: launching applications... 00:04:56.480 15:47:07 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:56.480 15:47:07 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:56.480 15:47:07 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:56.480 15:47:07 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:56.480 15:47:07 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:56.480 15:47:07 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:56.480 15:47:07 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56454 00:04:56.480 Waiting for target to run... 00:04:56.480 15:47:07 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:56.480 15:47:07 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56454 /var/tmp/spdk_tgt.sock 00:04:56.480 15:47:07 -- common/autotest_common.sh@829 -- # '[' -z 56454 ']' 00:04:56.480 15:47:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:56.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:56.480 15:47:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.480 15:47:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:56.480 15:47:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.480 15:47:07 -- common/autotest_common.sh@10 -- # set +x 00:04:56.480 15:47:07 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:56.480 [2024-11-29 15:47:07.892205] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:56.480 [2024-11-29 15:47:07.892339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56454 ] 00:04:57.111 [2024-11-29 15:47:08.247845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.111 [2024-11-29 15:47:08.383415] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:57.111 [2024-11-29 15:47:08.383575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.045 15:47:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.045 15:47:09 -- common/autotest_common.sh@862 -- # return 0 00:04:58.045 15:47:09 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:58.045 00:04:58.045 INFO: shutting down applications... 00:04:58.045 15:47:09 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:04:58.045 15:47:09 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:58.045 15:47:09 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:58.045 15:47:09 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:58.045 15:47:09 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56454 ]] 00:04:58.045 15:47:09 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56454 00:04:58.045 15:47:09 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:58.045 15:47:09 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:58.045 15:47:09 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56454 00:04:58.045 15:47:09 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:58.611 15:47:09 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:58.611 15:47:09 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:58.611 15:47:09 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56454 00:04:58.611 15:47:09 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:59.177 15:47:10 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:59.177 15:47:10 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:59.177 15:47:10 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56454 00:04:59.177 15:47:10 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:59.744 15:47:10 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:59.744 15:47:10 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:59.744 15:47:10 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56454 00:04:59.744 15:47:10 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:59.744 15:47:10 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:59.744 15:47:10 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:59.744 SPDK target shutdown done 00:04:59.744 15:47:10 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:59.744 Success 00:04:59.744 15:47:10 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:59.744 00:04:59.744 real 0m3.211s 00:04:59.744 user 0m3.047s 00:04:59.744 sys 0m0.449s 00:04:59.744 15:47:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:59.744 15:47:10 -- common/autotest_common.sh@10 -- # set +x 00:04:59.744 ************************************ 00:04:59.744 END TEST json_config_extra_key 00:04:59.744 ************************************
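The teardown traced in json_config_extra_key is worth calling out: json_config_test_shutdown_app sends SIGINT and then polls with `kill -0` every half second, giving the target up to 30 iterations (15 s) to exit cleanly; the poll rounds visible above at 0.5 s intervals are that loop converging. A condensed sketch of the pattern, with names mirroring the trace; the failure branch is an assumption, since this run never reaches it:

    declare -A app_pid        # populated at launch, e.g. app_pid[target]=56454

    json_config_test_shutdown_app() {
        local app=$1
        kill -SIGINT "${app_pid[$app]}"
        for (( i = 0; i < 30; i++ )); do
            # kill -0 delivers no signal; it only reports whether the pid still exists
            if ! kill -0 "${app_pid[$app]}" 2>/dev/null; then
                app_pid[$app]=
                break
            fi
            sleep 0.5
        done
        if [[ -n ${app_pid[$app]} ]]; then
            return 1    # assumption: the real helper reports a shutdown timeout here
        fi
        echo 'SPDK target shutdown done'
    }

Here the target exited on the fourth probe, so the loop clears app_pid[target], breaks, and the suite prints Success after roughly three seconds of wall time.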
00:04:59.744 15:47:10 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:59.744 15:47:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.744 15:47:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.744 15:47:10 -- common/autotest_common.sh@10 -- # set +x 00:04:59.744 ************************************ 00:04:59.744 START TEST alias_rpc 00:04:59.744 ************************************ 00:04:59.744 15:47:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:59.744 * Looking for test storage... 00:04:59.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc
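The setup lines that follow show the launch pattern every suite in this run uses: start a bare spdk_tgt in the background, record its pid, and call waitforlisten, which blocks until the target's UNIX-domain RPC socket answers. The trace only exposes waitforlisten's entry (rpc_addr defaulting to /var/tmp/spdk.sock, max_retries=100), so the loop below is a hedged reconstruction rather than the verbatim helper, and the rpc.py probe in particular is an assumption:

    waitforlisten() {
        [ -z "$1" ] && return 1                      # a pid argument is required
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}      # default visible in the trace
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            # probing the socket with a harmless RPC is an assumption here
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
                return 0                             # socket is up and answering
            fi
            sleep 0.1
        done
        return 1
    }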
00:04:59.745 15:47:11 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:59.745 15:47:11 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56547 00:04:59.745 15:47:11 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56547 00:04:59.745 15:47:11 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.745 15:47:11 -- common/autotest_common.sh@829 -- # '[' -z 56547 ']' 00:04:59.745 15:47:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.745 15:47:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.745 15:47:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.745 15:47:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.745 15:47:11 -- common/autotest_common.sh@10 -- # set +x 00:04:59.745 [2024-11-29 15:47:11.143290] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:59.745 [2024-11-29 15:47:11.143407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56547 ] 00:05:00.004 [2024-11-29 15:47:11.291785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.261 [2024-11-29 15:47:11.441311] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:00.261 [2024-11-29 15:47:11.441489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.827 15:47:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.827 15:47:11 -- common/autotest_common.sh@862 -- # return 0 00:05:00.827 15:47:11 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:00.827 15:47:12 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56547 00:05:00.827 15:47:12 -- common/autotest_common.sh@936 -- # '[' -z 56547 ']' 00:05:00.827 15:47:12 -- common/autotest_common.sh@940 -- # kill -0 56547 00:05:00.827 15:47:12 -- common/autotest_common.sh@941 -- # uname 00:05:00.827 15:47:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:00.827 15:47:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56547 00:05:00.827 killing process with pid 56547 00:05:00.827 15:47:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:00.827 15:47:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:00.827 15:47:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56547' 00:05:00.827 15:47:12 -- common/autotest_common.sh@955 -- # kill 56547 00:05:00.827 15:47:12 -- common/autotest_common.sh@960 -- # wait 56547 00:05:02.199 00:05:02.199 real 0m2.430s 00:05:02.199 user 0m2.520s 00:05:02.199 sys 0m0.380s 00:05:02.199 15:47:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:02.199 15:47:13 -- common/autotest_common.sh@10 -- # set +x 00:05:02.199 ************************************ 00:05:02.199 END TEST alias_rpc 00:05:02.199 ************************************
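The entire body of the alias_rpc suite is the single `rpc.py load_config -i` call traced above: a JSON configuration is fed to the target on stdin (`-i`), and the test passes only if the RPC layer still resolves legacy, aliased method names to their current equivalents. The shape of such an invocation is sketched below; the configuration is purely illustrative and not the one used in this run (construct_malloc_bdev stood for bdev_malloc_create in older SPDK releases, and accepting such aliases is exactly what a test like this pins down):

    json='{
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "construct_malloc_bdev",
          "params": { "name": "Malloc0", "num_blocks": 256, "block_size": 512 }
        }]
      }]
    }'
    echo "$json" | /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i

If the alias table were broken, load_config would report an unknown method, and the ERR trap set above ('killprocess $spdk_tgt_pid; exit 1') would fail the test.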
00:05:02.199 15:47:13 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:05:02.199 15:47:13 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:02.199 15:47:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.199 15:47:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.199 15:47:13 -- common/autotest_common.sh@10 -- # set +x 00:05:02.199 ************************************ 00:05:02.199 START TEST spdkcli_tcp 00:05:02.199 ************************************ 00:05:02.199 15:47:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:02.199 * Looking for test storage... 00:05:02.199 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:02.199 15:47:13 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:02.199 15:47:13 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:02.199 15:47:13 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:02.199 15:47:13 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:02.199 15:47:13 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:02.199 15:47:13 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:02.199 15:47:13 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:02.199 15:47:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:02.199 15:47:13 -- common/autotest_common.sh@10 -- # set +x 00:05:02.199 15:47:13 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=56631 00:05:02.199 15:47:13 -- spdkcli/tcp.sh@27 -- # waitforlisten 56631 00:05:02.199 15:47:13 -- common/autotest_common.sh@829 -- # '[' -z 56631 ']' 00:05:02.199 15:47:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.199 15:47:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.200 15:47:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.200 15:47:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.200 15:47:13 -- common/autotest_common.sh@10 -- # set +x 00:05:02.200 15:47:13 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:02.458 [2024-11-29 15:47:13.628050] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:02.458 [2024-11-29 15:47:13.628160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56631 ] 00:05:02.458 [2024-11-29 15:47:13.775082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.716 [2024-11-29 15:47:13.944919] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:02.716 [2024-11-29 15:47:13.945368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.716 [2024-11-29 15:47:13.945463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.096 15:47:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.096 15:47:15 -- common/autotest_common.sh@862 -- # return 0 00:05:04.096 15:47:15 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:04.096 15:47:15 -- spdkcli/tcp.sh@31 -- # socat_pid=56661 00:05:04.096 15:47:15 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:04.096 [ 00:05:04.096 "bdev_malloc_delete", 00:05:04.096 "bdev_malloc_create", 00:05:04.096 "bdev_null_resize", 00:05:04.096 "bdev_null_delete", 00:05:04.096 "bdev_null_create", 00:05:04.096 "bdev_nvme_cuse_unregister", 00:05:04.096 "bdev_nvme_cuse_register", 00:05:04.096 "bdev_opal_new_user", 00:05:04.096 "bdev_opal_set_lock_state", 00:05:04.096 "bdev_opal_delete", 00:05:04.096 "bdev_opal_get_info", 00:05:04.096 "bdev_opal_create", 00:05:04.096 "bdev_nvme_opal_revert", 00:05:04.096 "bdev_nvme_opal_init", 00:05:04.096 "bdev_nvme_send_cmd", 00:05:04.096 "bdev_nvme_get_path_iostat", 00:05:04.096 "bdev_nvme_get_mdns_discovery_info", 00:05:04.096 "bdev_nvme_stop_mdns_discovery", 00:05:04.096 "bdev_nvme_start_mdns_discovery", 00:05:04.096 "bdev_nvme_set_multipath_policy", 00:05:04.096 "bdev_nvme_set_preferred_path", 00:05:04.096 "bdev_nvme_get_io_paths", 00:05:04.096 "bdev_nvme_remove_error_injection", 00:05:04.096 "bdev_nvme_add_error_injection", 00:05:04.096 "bdev_nvme_get_discovery_info", 00:05:04.096 "bdev_nvme_stop_discovery", 00:05:04.096 "bdev_nvme_start_discovery", 00:05:04.096 "bdev_nvme_get_controller_health_info", 00:05:04.096 "bdev_nvme_disable_controller", 00:05:04.096 "bdev_nvme_enable_controller", 00:05:04.096 "bdev_nvme_reset_controller", 00:05:04.096 "bdev_nvme_get_transport_statistics", 00:05:04.096 "bdev_nvme_apply_firmware", 00:05:04.096 "bdev_nvme_detach_controller", 00:05:04.096 "bdev_nvme_get_controllers", 00:05:04.096 "bdev_nvme_attach_controller", 00:05:04.096 "bdev_nvme_set_hotplug", 00:05:04.096 "bdev_nvme_set_options", 00:05:04.096 "bdev_passthru_delete", 00:05:04.096 "bdev_passthru_create", 00:05:04.096 "bdev_lvol_grow_lvstore", 00:05:04.096 "bdev_lvol_get_lvols", 00:05:04.096 "bdev_lvol_get_lvstores", 00:05:04.096 "bdev_lvol_delete", 00:05:04.096 "bdev_lvol_set_read_only", 00:05:04.096 "bdev_lvol_resize", 00:05:04.096 "bdev_lvol_decouple_parent", 00:05:04.096 "bdev_lvol_inflate", 00:05:04.096 "bdev_lvol_rename", 00:05:04.096 "bdev_lvol_clone_bdev", 00:05:04.096 "bdev_lvol_clone", 00:05:04.096 "bdev_lvol_snapshot", 00:05:04.096 "bdev_lvol_create", 00:05:04.096 "bdev_lvol_delete_lvstore", 00:05:04.096 "bdev_lvol_rename_lvstore", 00:05:04.096 "bdev_lvol_create_lvstore", 00:05:04.096 "bdev_raid_set_options", 00:05:04.096 "bdev_raid_remove_base_bdev", 00:05:04.096 "bdev_raid_add_base_bdev", 
00:05:04.096 "bdev_raid_delete", 00:05:04.096 "bdev_raid_create", 00:05:04.096 "bdev_raid_get_bdevs", 00:05:04.096 "bdev_error_inject_error", 00:05:04.096 "bdev_error_delete", 00:05:04.096 "bdev_error_create", 00:05:04.096 "bdev_split_delete", 00:05:04.096 "bdev_split_create", 00:05:04.096 "bdev_delay_delete", 00:05:04.096 "bdev_delay_create", 00:05:04.096 "bdev_delay_update_latency", 00:05:04.096 "bdev_zone_block_delete", 00:05:04.096 "bdev_zone_block_create", 00:05:04.096 "blobfs_create", 00:05:04.096 "blobfs_detect", 00:05:04.096 "blobfs_set_cache_size", 00:05:04.096 "bdev_xnvme_delete", 00:05:04.096 "bdev_xnvme_create", 00:05:04.096 "bdev_aio_delete", 00:05:04.096 "bdev_aio_rescan", 00:05:04.096 "bdev_aio_create", 00:05:04.096 "bdev_ftl_set_property", 00:05:04.096 "bdev_ftl_get_properties", 00:05:04.096 "bdev_ftl_get_stats", 00:05:04.096 "bdev_ftl_unmap", 00:05:04.096 "bdev_ftl_unload", 00:05:04.096 "bdev_ftl_delete", 00:05:04.096 "bdev_ftl_load", 00:05:04.096 "bdev_ftl_create", 00:05:04.096 "bdev_virtio_attach_controller", 00:05:04.096 "bdev_virtio_scsi_get_devices", 00:05:04.096 "bdev_virtio_detach_controller", 00:05:04.096 "bdev_virtio_blk_set_hotplug", 00:05:04.096 "bdev_iscsi_delete", 00:05:04.096 "bdev_iscsi_create", 00:05:04.096 "bdev_iscsi_set_options", 00:05:04.096 "accel_error_inject_error", 00:05:04.096 "ioat_scan_accel_module", 00:05:04.097 "dsa_scan_accel_module", 00:05:04.097 "iaa_scan_accel_module", 00:05:04.097 "iscsi_set_options", 00:05:04.097 "iscsi_get_auth_groups", 00:05:04.097 "iscsi_auth_group_remove_secret", 00:05:04.097 "iscsi_auth_group_add_secret", 00:05:04.097 "iscsi_delete_auth_group", 00:05:04.097 "iscsi_create_auth_group", 00:05:04.097 "iscsi_set_discovery_auth", 00:05:04.097 "iscsi_get_options", 00:05:04.097 "iscsi_target_node_request_logout", 00:05:04.097 "iscsi_target_node_set_redirect", 00:05:04.097 "iscsi_target_node_set_auth", 00:05:04.097 "iscsi_target_node_add_lun", 00:05:04.097 "iscsi_get_connections", 00:05:04.097 "iscsi_portal_group_set_auth", 00:05:04.097 "iscsi_start_portal_group", 00:05:04.097 "iscsi_delete_portal_group", 00:05:04.097 "iscsi_create_portal_group", 00:05:04.097 "iscsi_get_portal_groups", 00:05:04.097 "iscsi_delete_target_node", 00:05:04.097 "iscsi_target_node_remove_pg_ig_maps", 00:05:04.097 "iscsi_target_node_add_pg_ig_maps", 00:05:04.097 "iscsi_create_target_node", 00:05:04.097 "iscsi_get_target_nodes", 00:05:04.097 "iscsi_delete_initiator_group", 00:05:04.097 "iscsi_initiator_group_remove_initiators", 00:05:04.097 "iscsi_initiator_group_add_initiators", 00:05:04.097 "iscsi_create_initiator_group", 00:05:04.097 "iscsi_get_initiator_groups", 00:05:04.097 "nvmf_set_crdt", 00:05:04.097 "nvmf_set_config", 00:05:04.097 "nvmf_set_max_subsystems", 00:05:04.097 "nvmf_subsystem_get_listeners", 00:05:04.097 "nvmf_subsystem_get_qpairs", 00:05:04.097 "nvmf_subsystem_get_controllers", 00:05:04.097 "nvmf_get_stats", 00:05:04.097 "nvmf_get_transports", 00:05:04.097 "nvmf_create_transport", 00:05:04.097 "nvmf_get_targets", 00:05:04.097 "nvmf_delete_target", 00:05:04.097 "nvmf_create_target", 00:05:04.097 "nvmf_subsystem_allow_any_host", 00:05:04.097 "nvmf_subsystem_remove_host", 00:05:04.097 "nvmf_subsystem_add_host", 00:05:04.097 "nvmf_subsystem_remove_ns", 00:05:04.097 "nvmf_subsystem_add_ns", 00:05:04.097 "nvmf_subsystem_listener_set_ana_state", 00:05:04.097 "nvmf_discovery_get_referrals", 00:05:04.097 "nvmf_discovery_remove_referral", 00:05:04.097 "nvmf_discovery_add_referral", 00:05:04.097 "nvmf_subsystem_remove_listener", 00:05:04.097 
"nvmf_subsystem_add_listener", 00:05:04.097 "nvmf_delete_subsystem", 00:05:04.097 "nvmf_create_subsystem", 00:05:04.097 "nvmf_get_subsystems", 00:05:04.097 "env_dpdk_get_mem_stats", 00:05:04.097 "nbd_get_disks", 00:05:04.097 "nbd_stop_disk", 00:05:04.097 "nbd_start_disk", 00:05:04.097 "ublk_recover_disk", 00:05:04.097 "ublk_get_disks", 00:05:04.097 "ublk_stop_disk", 00:05:04.097 "ublk_start_disk", 00:05:04.097 "ublk_destroy_target", 00:05:04.097 "ublk_create_target", 00:05:04.097 "virtio_blk_create_transport", 00:05:04.097 "virtio_blk_get_transports", 00:05:04.097 "vhost_controller_set_coalescing", 00:05:04.097 "vhost_get_controllers", 00:05:04.097 "vhost_delete_controller", 00:05:04.097 "vhost_create_blk_controller", 00:05:04.097 "vhost_scsi_controller_remove_target", 00:05:04.097 "vhost_scsi_controller_add_target", 00:05:04.097 "vhost_start_scsi_controller", 00:05:04.097 "vhost_create_scsi_controller", 00:05:04.097 "thread_set_cpumask", 00:05:04.097 "framework_get_scheduler", 00:05:04.097 "framework_set_scheduler", 00:05:04.097 "framework_get_reactors", 00:05:04.097 "thread_get_io_channels", 00:05:04.097 "thread_get_pollers", 00:05:04.097 "thread_get_stats", 00:05:04.097 "framework_monitor_context_switch", 00:05:04.097 "spdk_kill_instance", 00:05:04.097 "log_enable_timestamps", 00:05:04.097 "log_get_flags", 00:05:04.097 "log_clear_flag", 00:05:04.097 "log_set_flag", 00:05:04.097 "log_get_level", 00:05:04.097 "log_set_level", 00:05:04.097 "log_get_print_level", 00:05:04.097 "log_set_print_level", 00:05:04.097 "framework_enable_cpumask_locks", 00:05:04.097 "framework_disable_cpumask_locks", 00:05:04.097 "framework_wait_init", 00:05:04.097 "framework_start_init", 00:05:04.097 "scsi_get_devices", 00:05:04.097 "bdev_get_histogram", 00:05:04.097 "bdev_enable_histogram", 00:05:04.097 "bdev_set_qos_limit", 00:05:04.097 "bdev_set_qd_sampling_period", 00:05:04.097 "bdev_get_bdevs", 00:05:04.097 "bdev_reset_iostat", 00:05:04.097 "bdev_get_iostat", 00:05:04.097 "bdev_examine", 00:05:04.097 "bdev_wait_for_examine", 00:05:04.097 "bdev_set_options", 00:05:04.097 "notify_get_notifications", 00:05:04.097 "notify_get_types", 00:05:04.097 "accel_get_stats", 00:05:04.097 "accel_set_options", 00:05:04.097 "accel_set_driver", 00:05:04.097 "accel_crypto_key_destroy", 00:05:04.097 "accel_crypto_keys_get", 00:05:04.097 "accel_crypto_key_create", 00:05:04.097 "accel_assign_opc", 00:05:04.097 "accel_get_module_info", 00:05:04.097 "accel_get_opc_assignments", 00:05:04.097 "vmd_rescan", 00:05:04.097 "vmd_remove_device", 00:05:04.097 "vmd_enable", 00:05:04.097 "sock_set_default_impl", 00:05:04.097 "sock_impl_set_options", 00:05:04.097 "sock_impl_get_options", 00:05:04.097 "iobuf_get_stats", 00:05:04.097 "iobuf_set_options", 00:05:04.097 "framework_get_pci_devices", 00:05:04.097 "framework_get_config", 00:05:04.097 "framework_get_subsystems", 00:05:04.097 "trace_get_info", 00:05:04.097 "trace_get_tpoint_group_mask", 00:05:04.097 "trace_disable_tpoint_group", 00:05:04.097 "trace_enable_tpoint_group", 00:05:04.097 "trace_clear_tpoint_mask", 00:05:04.097 "trace_set_tpoint_mask", 00:05:04.097 "spdk_get_version", 00:05:04.097 "rpc_get_methods" 00:05:04.097 ] 00:05:04.097 15:47:15 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:04.097 15:47:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:04.097 15:47:15 -- common/autotest_common.sh@10 -- # set +x 00:05:04.097 15:47:15 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:04.097 15:47:15 -- spdkcli/tcp.sh@38 -- # killprocess 56631 00:05:04.097 
15:47:15 -- common/autotest_common.sh@936 -- # '[' -z 56631 ']' 00:05:04.097 15:47:15 -- common/autotest_common.sh@940 -- # kill -0 56631 00:05:04.097 15:47:15 -- common/autotest_common.sh@941 -- # uname 00:05:04.097 15:47:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:04.097 15:47:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56631 00:05:04.097 15:47:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:04.097 15:47:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:04.097 killing process with pid 56631 00:05:04.097 15:47:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56631' 00:05:04.097 15:47:15 -- common/autotest_common.sh@955 -- # kill 56631 00:05:04.097 15:47:15 -- common/autotest_common.sh@960 -- # wait 56631 00:05:05.470 00:05:05.470 real 0m3.148s 00:05:05.470 user 0m5.750s 00:05:05.470 sys 0m0.440s 00:05:05.470 15:47:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:05.470 ************************************ 00:05:05.470 END TEST spdkcli_tcp 00:05:05.470 ************************************ 00:05:05.470 15:47:16 -- common/autotest_common.sh@10 -- # set +x
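A note on how the method list in the spdkcli_tcp suite above was obtained: the target only listens on /var/tmp/spdk.sock, so tcp.sh bridges that UNIX socket to TCP port 9998 with socat and points rpc.py at 127.0.0.1:9998, retrying up to 100 times (-r 100) with a 2 s timeout (-t 2), exactly as the trace shows. The same bridge can be reproduced by hand; the flags below come from the trace, while the backgrounding, the socat reuseaddr/fork options, and the jq filter are additions for interactive use:

    # bridge the target's UNIX-domain RPC socket onto TCP port 9998
    socat TCP-LISTEN:9998,reuseaddr,fork UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # fetch the method list through the bridge, as tcp.sh does,
    # then narrow the JSON array down to one family of methods
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods \
        | jq -r '.[]' | grep '^bdev_nvme_'

    kill "$socat_pid"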
00:05:05.470 15:47:16 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:05.470 15:47:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.470 15:47:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.470 15:47:16 -- common/autotest_common.sh@10 -- # set +x 00:05:05.470 ************************************ 00:05:05.470 START TEST dpdk_mem_utility 00:05:05.470 ************************************ 00:05:05.470 15:47:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:05.470 * Looking for test storage... 00:05:05.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:05.470 15:47:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:05.470 15:47:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=56743 00:05:05.470 15:47:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.470 15:47:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 56743 00:05:05.470 15:47:16 -- common/autotest_common.sh@829 -- # '[' -z 56743 ']' 00:05:05.470 15:47:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.470 15:47:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:05.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.470 15:47:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:05.470 15:47:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:05.470 15:47:16 -- common/autotest_common.sh@10 -- # set +x 00:05:05.470 [2024-11-29 15:47:16.820421] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:05.471 [2024-11-29 15:47:16.820539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56743 ] 00:05:05.729 [2024-11-29 15:47:16.967965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.729 [2024-11-29 15:47:17.136373] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:05.729 [2024-11-29 15:47:17.136581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.105 15:47:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.105 15:47:18 -- common/autotest_common.sh@862 -- # return 0 00:05:07.105 15:47:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:07.105 15:47:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:07.105 15:47:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.105 15:47:18 -- common/autotest_common.sh@10 -- # set +x 00:05:07.105 { 00:05:07.105 "filename": "/tmp/spdk_mem_dump.txt" 00:05:07.105 } 00:05:07.105 15:47:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.105 15:47:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:07.105 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:07.105 1 heaps totaling size 820.000000 MiB 00:05:07.105 size: 820.000000 MiB heap id: 0 00:05:07.105 end heaps---------- 00:05:07.105 8 mempools totaling size 598.116089 MiB 00:05:07.105 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:07.105 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:07.105 size: 84.521057 MiB name: bdev_io_56743 00:05:07.105 size: 51.011292 MiB name: evtpool_56743 00:05:07.105 size: 50.003479 MiB name: msgpool_56743 00:05:07.105 size: 21.763794 MiB name: PDU_Pool 00:05:07.105 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:07.105 size: 0.026123 MiB name: Session_Pool 00:05:07.105 end mempools------- 00:05:07.105 6 memzones totaling size 4.142822 MiB 00:05:07.105 size: 1.000366 MiB name: RG_ring_0_56743 00:05:07.105 size: 1.000366 MiB name: RG_ring_1_56743 00:05:07.105 size: 1.000366 MiB name: RG_ring_4_56743 00:05:07.105 size: 1.000366 MiB name: RG_ring_5_56743 00:05:07.105 size: 0.125366 MiB name: RG_ring_2_56743 00:05:07.105 size: 0.015991 MiB name: RG_ring_3_56743 00:05:07.105 end memzones------- 00:05:07.105 15:47:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:07.105 heap id: 0 total size: 820.000000 MiB number of busy elements: 302 number of free elements: 18 00:05:07.105 list of free elements. 
size: 18.451050 MiB 00:05:07.105 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:07.105 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:07.105 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:07.105 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:07.105 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:07.105 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:07.105 element at address: 0x200019600000 with size: 0.999084 MiB 00:05:07.105 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:07.105 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:07.105 element at address: 0x200018e00000 with size: 0.959656 MiB 00:05:07.105 element at address: 0x200019900040 with size: 0.936401 MiB 00:05:07.105 element at address: 0x200000200000 with size: 0.829224 MiB 00:05:07.105 element at address: 0x20001b000000 with size: 0.563660 MiB 00:05:07.105 element at address: 0x200019200000 with size: 0.487976 MiB 00:05:07.105 element at address: 0x200019a00000 with size: 0.485413 MiB 00:05:07.105 element at address: 0x200013800000 with size: 0.467651 MiB 00:05:07.105 element at address: 0x200028400000 with size: 0.391174 MiB 00:05:07.105 element at address: 0x200003a00000 with size: 0.352234 MiB 00:05:07.105 list of standard malloc elements. size: 199.284546 MiB 00:05:07.105 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:07.105 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:07.105 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:07.105 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:07.105 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:07.105 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:07.105 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:07.105 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:07.105 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:05:07.105 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:05:07.105 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:05:07.105 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:05:07.105 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:05:07.105 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:05:07.105 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:05:07.105 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:05:07.105 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:05:07.105 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:05:07.105 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:05:07.105 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:05:07.105 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:05:07.105 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:05:07.105 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:05:07.105 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:07.105 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:07.105 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:05:07.105 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:05:07.105 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:07.105 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:07.105 element at address: 0x2000002d5680 with size: 0.000244 MiB 
00:05:07.105 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:07.105 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:07.105 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:07.105 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:05:07.105 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:07.105 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:05:07.106 element at 
address: 0x200003a5afc0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200013877b80 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200013877c80 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200013877d80 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200013877e80 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200013877f80 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200013878080 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200013878180 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200013878280 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200013878380 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200013878480 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200013878580 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001927d1c0 
with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x200019abc680 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0927c0 with size: 0.000244 MiB 
00:05:07.106 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:05:07.106 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:05:07.107 element at address: 0x200028464240 with size: 0.000244 MiB 00:05:07.107 element at address: 0x200028464340 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846b000 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846b280 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846b380 with size: 0.000244 MiB 00:05:07.107 element at 
address: 0x20002846b480 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846b580 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846b680 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846b780 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846b880 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846b980 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846be80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846c080 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846c180 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846c280 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846c380 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846c480 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846c580 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846c680 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846c780 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846c880 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846c980 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846d080 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846d180 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846d280 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846d380 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846d480 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846d580 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846d680 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846d780 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846d880 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846d980 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846da80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846db80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846de80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846df80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846e080 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846e180 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846e280 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846e380 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846e480 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846e580 
with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846e680 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846e780 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846e880 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846e980 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846f080 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846f180 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846f280 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846f380 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846f480 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846f580 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846f680 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846f780 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846f880 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846f980 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:05:07.107 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:05:07.107 list of memzone associated elements. 
size: 602.264404 MiB 00:05:07.107 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:07.107 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:07.107 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:07.107 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:07.107 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:07.107 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_56743_0 00:05:07.107 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:07.107 associated memzone info: size: 48.002930 MiB name: MP_evtpool_56743_0 00:05:07.107 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:07.107 associated memzone info: size: 48.002930 MiB name: MP_msgpool_56743_0 00:05:07.107 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:07.107 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:07.107 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:07.107 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:07.107 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:07.107 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_56743 00:05:07.107 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:07.107 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_56743 00:05:07.107 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:07.107 associated memzone info: size: 1.007996 MiB name: MP_evtpool_56743 00:05:07.107 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:07.107 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:07.107 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:07.107 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:07.108 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:07.108 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:07.108 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:07.108 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:07.108 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:07.108 associated memzone info: size: 1.000366 MiB name: RG_ring_0_56743 00:05:07.108 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:07.108 associated memzone info: size: 1.000366 MiB name: RG_ring_1_56743 00:05:07.108 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:07.108 associated memzone info: size: 1.000366 MiB name: RG_ring_4_56743 00:05:07.108 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:07.108 associated memzone info: size: 1.000366 MiB name: RG_ring_5_56743 00:05:07.108 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:07.108 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_56743 00:05:07.108 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:05:07.108 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:07.108 element at address: 0x200013878680 with size: 0.500549 MiB 00:05:07.108 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:07.108 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:05:07.108 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:07.108 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:07.108 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_56743 00:05:07.108 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:05:07.108 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:07.108 element at address: 0x200028464440 with size: 0.023804 MiB 00:05:07.108 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:07.108 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:07.108 associated memzone info: size: 0.015991 MiB name: RG_ring_3_56743 00:05:07.108 element at address: 0x20002846a5c0 with size: 0.002502 MiB 00:05:07.108 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:07.108 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:05:07.108 associated memzone info: size: 0.000183 MiB name: MP_msgpool_56743 00:05:07.108 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:07.108 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_56743 00:05:07.108 element at address: 0x20002846b100 with size: 0.000366 MiB 00:05:07.108 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:07.108 15:47:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:07.108 15:47:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 56743 00:05:07.108 15:47:18 -- common/autotest_common.sh@936 -- # '[' -z 56743 ']' 00:05:07.108 15:47:18 -- common/autotest_common.sh@940 -- # kill -0 56743 00:05:07.108 15:47:18 -- common/autotest_common.sh@941 -- # uname 00:05:07.108 15:47:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:07.108 15:47:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56743 00:05:07.108 15:47:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:07.108 15:47:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:07.108 killing process with pid 56743 00:05:07.108 15:47:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56743' 00:05:07.108 15:47:18 -- common/autotest_common.sh@955 -- # kill 56743 00:05:07.108 15:47:18 -- common/autotest_common.sh@960 -- # wait 56743 00:05:08.549 00:05:08.549 real 0m3.233s 00:05:08.549 user 0m3.422s 00:05:08.549 sys 0m0.404s 00:05:08.549 15:47:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.549 15:47:19 -- common/autotest_common.sh@10 -- # set +x 00:05:08.549 ************************************ 00:05:08.549 END TEST dpdk_mem_utility 00:05:08.549 ************************************ 00:05:08.549 15:47:19 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:08.549 15:47:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.549 15:47:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.549 15:47:19 -- common/autotest_common.sh@10 -- # set +x 00:05:08.549 ************************************ 00:05:08.549 START TEST event 00:05:08.549 ************************************ 00:05:08.549 15:47:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:08.549 * Looking for test storage... 
00:05:08.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:08.549 15:47:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:08.549 15:47:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:08.549 15:47:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:08.807 15:47:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:08.807 15:47:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:08.807 15:47:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:08.807 15:47:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:08.807 15:47:19 -- scripts/common.sh@335 -- # IFS=.-: 00:05:08.807 15:47:20 -- scripts/common.sh@335 -- # read -ra ver1 00:05:08.807 15:47:20 -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.807 15:47:20 -- scripts/common.sh@336 -- # read -ra ver2 00:05:08.807 15:47:20 -- scripts/common.sh@337 -- # local 'op=<' 00:05:08.807 15:47:20 -- scripts/common.sh@339 -- # ver1_l=2 00:05:08.807 15:47:20 -- scripts/common.sh@340 -- # ver2_l=1 00:05:08.807 15:47:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:08.807 15:47:20 -- scripts/common.sh@343 -- # case "$op" in 00:05:08.807 15:47:20 -- scripts/common.sh@344 -- # : 1 00:05:08.807 15:47:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:08.807 15:47:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:08.807 15:47:20 -- scripts/common.sh@364 -- # decimal 1 00:05:08.807 15:47:20 -- scripts/common.sh@352 -- # local d=1 00:05:08.807 15:47:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.807 15:47:20 -- scripts/common.sh@354 -- # echo 1 00:05:08.807 15:47:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:08.807 15:47:20 -- scripts/common.sh@365 -- # decimal 2 00:05:08.807 15:47:20 -- scripts/common.sh@352 -- # local d=2 00:05:08.807 15:47:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.808 15:47:20 -- scripts/common.sh@354 -- # echo 2 00:05:08.808 15:47:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:08.808 15:47:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:08.808 15:47:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:08.808 15:47:20 -- scripts/common.sh@367 -- # return 0 00:05:08.808 15:47:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.808 15:47:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:08.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.808 --rc genhtml_branch_coverage=1 00:05:08.808 --rc genhtml_function_coverage=1 00:05:08.808 --rc genhtml_legend=1 00:05:08.808 --rc geninfo_all_blocks=1 00:05:08.808 --rc geninfo_unexecuted_blocks=1 00:05:08.808 00:05:08.808 ' 00:05:08.808 15:47:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:08.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.808 --rc genhtml_branch_coverage=1 00:05:08.808 --rc genhtml_function_coverage=1 00:05:08.808 --rc genhtml_legend=1 00:05:08.808 --rc geninfo_all_blocks=1 00:05:08.808 --rc geninfo_unexecuted_blocks=1 00:05:08.808 00:05:08.808 ' 00:05:08.808 15:47:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:08.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.808 --rc genhtml_branch_coverage=1 00:05:08.808 --rc genhtml_function_coverage=1 00:05:08.808 --rc genhtml_legend=1 00:05:08.808 --rc geninfo_all_blocks=1 00:05:08.808 --rc geninfo_unexecuted_blocks=1 00:05:08.808 00:05:08.808 ' 00:05:08.808 15:47:20 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:08.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.808 --rc genhtml_branch_coverage=1 00:05:08.808 --rc genhtml_function_coverage=1 00:05:08.808 --rc genhtml_legend=1 00:05:08.808 --rc geninfo_all_blocks=1 00:05:08.808 --rc geninfo_unexecuted_blocks=1 00:05:08.808 00:05:08.808 ' 00:05:08.808 15:47:20 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:08.808 15:47:20 -- bdev/nbd_common.sh@6 -- # set -e 00:05:08.808 15:47:20 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:08.808 15:47:20 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:08.808 15:47:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.808 15:47:20 -- common/autotest_common.sh@10 -- # set +x 00:05:08.808 ************************************ 00:05:08.808 START TEST event_perf 00:05:08.808 ************************************ 00:05:08.808 15:47:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:08.808 Running I/O for 1 seconds...[2024-11-29 15:47:20.060389] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:08.808 [2024-11-29 15:47:20.060502] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56852 ] 00:05:08.808 [2024-11-29 15:47:20.208363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:09.066 [2024-11-29 15:47:20.348029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.066 [2024-11-29 15:47:20.348168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.066 Running I/O for 1 seconds...[2024-11-29 15:47:20.348598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.066 [2024-11-29 15:47:20.348630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:10.444 00:05:10.444 lcore 0: 162486 00:05:10.444 lcore 1: 162488 00:05:10.444 lcore 2: 162487 00:05:10.444 lcore 3: 162490 00:05:10.444 done. 00:05:10.444 00:05:10.444 real 0m1.532s 00:05:10.444 user 0m4.322s 00:05:10.444 sys 0m0.092s 00:05:10.444 15:47:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.444 15:47:21 -- common/autotest_common.sh@10 -- # set +x 00:05:10.444 ************************************ 00:05:10.444 END TEST event_perf 00:05:10.444 ************************************ 00:05:10.444 15:47:21 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:10.444 15:47:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:10.444 15:47:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.444 15:47:21 -- common/autotest_common.sh@10 -- # set +x 00:05:10.444 ************************************ 00:05:10.444 START TEST event_reactor 00:05:10.444 ************************************ 00:05:10.444 15:47:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:10.444 [2024-11-29 15:47:21.631764] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
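Every unit in this log is framed the same way: a START TEST banner, the command's own output, time(1) accounting (the real/user/sys lines), then an END TEST banner. A rough sketch of what the harness's run_test wrapper does to produce that framing; the body below is inferred from the log's markers, not copied from autotest_common.sh:

run_test_sketch() {                     # usage: run_test_sketch <name> <command> [args...]
    local name=$1 rc
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                           # run the test, printing real/user/sys as in the log
    rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc                          # propagate the test's exit status to the caller
}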
00:05:10.444 [2024-11-29 15:47:21.631876] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56886 ]
00:05:10.444 [2024-11-29 15:47:21.779068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:10.707 [2024-11-29 15:47:21.919985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:12.088 test_start
00:05:12.088 oneshot
00:05:12.088 tick 100
00:05:12.088 tick 100
00:05:12.088 tick 250
00:05:12.088 tick 100
00:05:12.088 tick 100
00:05:12.088 tick 250
00:05:12.088 tick 100
00:05:12.088 tick 500
00:05:12.088 tick 100
00:05:12.088 tick 100
00:05:12.088 tick 250
00:05:12.088 tick 100
00:05:12.088 tick 100
00:05:12.088 test_end
00:05:12.088
00:05:12.088 real 0m1.527s
00:05:12.088 user 0m1.353s
00:05:12.088 sys 0m0.067s
00:05:12.088 15:47:23 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:12.088 15:47:23 -- common/autotest_common.sh@10 -- # set +x
00:05:12.088 ************************************
00:05:12.088 END TEST event_reactor
00:05:12.088 ************************************
00:05:12.088 15:47:23 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:12.088 15:47:23 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:05:12.088 15:47:23 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:12.088 15:47:23 -- common/autotest_common.sh@10 -- # set +x
00:05:12.088 ************************************
00:05:12.088 START TEST event_reactor_perf
00:05:12.088 ************************************
00:05:12.088 15:47:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:12.088 [2024-11-29 15:47:23.192880] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
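The tick lines above are the reactor test's timers firing at their configured periods. One quick way to condense such a trace from a saved copy of the log (reactor.log here is a hypothetical capture, not a file the harness writes):

grep -oE 'tick [0-9]+' reactor.log | sort | uniq -c
# for the run above this prints:   9 tick 100   3 tick 250   1 tick 500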
00:05:12.088 [2024-11-29 15:47:23.192955] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56928 ] 00:05:12.088 [2024-11-29 15:47:23.337627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.346 [2024-11-29 15:47:23.520610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.726 test_start 00:05:13.726 test_end 00:05:13.726 Performance: 314133 events per second 00:05:13.726 00:05:13.726 real 0m1.614s 00:05:13.726 user 0m1.439s 00:05:13.726 sys 0m0.067s 00:05:13.726 15:47:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.726 15:47:24 -- common/autotest_common.sh@10 -- # set +x 00:05:13.726 ************************************ 00:05:13.726 END TEST event_reactor_perf 00:05:13.726 ************************************ 00:05:13.726 15:47:24 -- event/event.sh@49 -- # uname -s 00:05:13.726 15:47:24 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:13.726 15:47:24 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:13.726 15:47:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.726 15:47:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.726 15:47:24 -- common/autotest_common.sh@10 -- # set +x 00:05:13.726 ************************************ 00:05:13.726 START TEST event_scheduler 00:05:13.726 ************************************ 00:05:13.726 15:47:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:13.726 * Looking for test storage... 00:05:13.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:13.726 15:47:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:13.726 15:47:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:13.726 15:47:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:13.726 15:47:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:13.726 15:47:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:13.726 15:47:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:13.726 15:47:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:13.726 15:47:24 -- scripts/common.sh@335 -- # IFS=.-: 00:05:13.726 15:47:24 -- scripts/common.sh@335 -- # read -ra ver1 00:05:13.726 15:47:24 -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.726 15:47:24 -- scripts/common.sh@336 -- # read -ra ver2 00:05:13.726 15:47:24 -- scripts/common.sh@337 -- # local 'op=<' 00:05:13.726 15:47:24 -- scripts/common.sh@339 -- # ver1_l=2 00:05:13.726 15:47:24 -- scripts/common.sh@340 -- # ver2_l=1 00:05:13.726 15:47:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:13.726 15:47:24 -- scripts/common.sh@343 -- # case "$op" in 00:05:13.726 15:47:24 -- scripts/common.sh@344 -- # : 1 00:05:13.726 15:47:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:13.726 15:47:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.726 15:47:24 -- scripts/common.sh@364 -- # decimal 1 00:05:13.726 15:47:24 -- scripts/common.sh@352 -- # local d=1 00:05:13.726 15:47:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.726 15:47:24 -- scripts/common.sh@354 -- # echo 1 00:05:13.726 15:47:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:13.726 15:47:24 -- scripts/common.sh@365 -- # decimal 2 00:05:13.726 15:47:24 -- scripts/common.sh@352 -- # local d=2 00:05:13.726 15:47:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.726 15:47:24 -- scripts/common.sh@354 -- # echo 2 00:05:13.726 15:47:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:13.726 15:47:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:13.726 15:47:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:13.726 15:47:24 -- scripts/common.sh@367 -- # return 0 00:05:13.726 15:47:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.726 15:47:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:13.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.726 --rc genhtml_branch_coverage=1 00:05:13.726 --rc genhtml_function_coverage=1 00:05:13.726 --rc genhtml_legend=1 00:05:13.726 --rc geninfo_all_blocks=1 00:05:13.726 --rc geninfo_unexecuted_blocks=1 00:05:13.726 00:05:13.726 ' 00:05:13.726 15:47:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:13.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.726 --rc genhtml_branch_coverage=1 00:05:13.726 --rc genhtml_function_coverage=1 00:05:13.726 --rc genhtml_legend=1 00:05:13.726 --rc geninfo_all_blocks=1 00:05:13.726 --rc geninfo_unexecuted_blocks=1 00:05:13.726 00:05:13.726 ' 00:05:13.726 15:47:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:13.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.726 --rc genhtml_branch_coverage=1 00:05:13.726 --rc genhtml_function_coverage=1 00:05:13.726 --rc genhtml_legend=1 00:05:13.726 --rc geninfo_all_blocks=1 00:05:13.726 --rc geninfo_unexecuted_blocks=1 00:05:13.726 00:05:13.726 ' 00:05:13.726 15:47:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:13.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.726 --rc genhtml_branch_coverage=1 00:05:13.726 --rc genhtml_function_coverage=1 00:05:13.726 --rc genhtml_legend=1 00:05:13.726 --rc geninfo_all_blocks=1 00:05:13.726 --rc geninfo_unexecuted_blocks=1 00:05:13.726 00:05:13.726 ' 00:05:13.726 15:47:24 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:13.726 15:47:24 -- scheduler/scheduler.sh@35 -- # scheduler_pid=56992 00:05:13.726 15:47:24 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.726 15:47:24 -- scheduler/scheduler.sh@37 -- # waitforlisten 56992 00:05:13.726 15:47:24 -- common/autotest_common.sh@829 -- # '[' -z 56992 ']' 00:05:13.726 15:47:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.726 15:47:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.726 15:47:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
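The waitforlisten trace starting here is a poll loop, not a fixed sleep: it repeatedly checks that the freshly spawned target is still alive and that its RPC socket has appeared, up to max_retries attempts. A simplified sketch of that idea (the real helper in autotest_common.sh also confirms the RPC server answers, not merely that the socket path exists):

waitforlisten_sketch() {                # usage: waitforlisten_sketch <pid> [rpc_addr]
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1  # target died during startup
        [[ -S $rpc_addr ]] && return 0          # socket is up; callers can start issuing RPCs
        sleep 0.1
    done
    return 1                                    # never came up within the retry budget
}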
00:05:13.726 15:47:24 -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:13.726 15:47:24 -- common/autotest_common.sh@10 -- # set +x
00:05:13.726 15:47:24 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:05:13.726 [2024-11-29 15:47:25.042056] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:13.726 [2024-11-29 15:47:25.042150] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56992 ]
00:05:13.983 [2024-11-29 15:47:25.188732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:13.983 [2024-11-29 15:47:25.389863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:13.983 [2024-11-29 15:47:25.390081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:13.983 [2024-11-29 15:47:25.390331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:05:13.983 [2024-11-29 15:47:25.390348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:05:14.549 15:47:25 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:14.549 15:47:25 -- common/autotest_common.sh@862 -- # return 0
00:05:14.550 15:47:25 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:14.550 15:47:25 -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:14.550 15:47:25 -- common/autotest_common.sh@10 -- # set +x
00:05:14.550 POWER: Env isn't set yet!
00:05:14.550 POWER: Attempting to initialise ACPI cpufreq power management...
00:05:14.550 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:14.550 POWER: Cannot set governor of lcore 0 to userspace
00:05:14.550 POWER: Attempting to initialise PSTAT power management...
00:05:14.550 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:14.550 POWER: Cannot set governor of lcore 0 to performance
00:05:14.550 POWER: Attempting to initialise AMD PSTATE power management...
00:05:14.550 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:14.550 POWER: Cannot set governor of lcore 0 to userspace
00:05:14.550 POWER: Attempting to initialise CPPC power management...
00:05:14.550 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:14.550 POWER: Cannot set governor of lcore 0 to userspace
00:05:14.550 POWER: Attempting to initialise VM power management...
00:05:14.550 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:14.550 POWER: Unable to set Power Management Environment for lcore 0 00:05:14.550 [2024-11-29 15:47:25.864324] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:14.550 [2024-11-29 15:47:25.864375] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:14.550 [2024-11-29 15:47:25.864418] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:14.550 [2024-11-29 15:47:25.864437] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:14.550 [2024-11-29 15:47:25.864448] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:14.550 [2024-11-29 15:47:25.864455] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:14.550 15:47:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.550 15:47:25 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:14.550 15:47:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.550 15:47:25 -- common/autotest_common.sh@10 -- # set +x 00:05:14.809 [2024-11-29 15:47:26.104145] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:14.809 15:47:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.809 15:47:26 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:14.809 15:47:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.809 15:47:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.809 15:47:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.809 ************************************ 00:05:14.809 START TEST scheduler_create_thread 00:05:14.809 ************************************ 00:05:14.809 15:47:26 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:14.809 15:47:26 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:14.809 15:47:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.809 15:47:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.809 2 00:05:14.809 15:47:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.809 15:47:26 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:14.809 15:47:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.809 15:47:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.809 3 00:05:14.809 15:47:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.809 15:47:26 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:14.809 15:47:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.809 15:47:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.809 4 00:05:14.809 15:47:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.809 15:47:26 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:14.809 15:47:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.809 15:47:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.809 5 00:05:14.809 15:47:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.809 15:47:26 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:14.809 15:47:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.809 15:47:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.809 6 00:05:14.809 15:47:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.809 15:47:26 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:14.809 15:47:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.809 15:47:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.809 7 00:05:14.809 15:47:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.809 15:47:26 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:14.809 15:47:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.809 15:47:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.809 8 00:05:14.809 15:47:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.809 15:47:26 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:14.809 15:47:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.809 15:47:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.809 9 00:05:14.809 15:47:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.809 15:47:26 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:14.809 15:47:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.809 15:47:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.809 10 00:05:14.809 15:47:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.809 15:47:26 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:14.809 15:47:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.809 15:47:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.809 15:47:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.809 15:47:26 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:14.809 15:47:26 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:14.809 15:47:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.809 15:47:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.809 15:47:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.809 15:47:26 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:14.809 15:47:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.809 15:47:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.809 15:47:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.809 15:47:26 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:14.809 15:47:26 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:14.809 15:47:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.809 15:47:26 -- common/autotest_common.sh@10 -- # set +x 00:05:16.182 15:47:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.182 00:05:16.182 real 0m1.174s 00:05:16.182 user 0m0.011s 00:05:16.182 sys 0m0.005s 00:05:16.182 15:47:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.182 15:47:27 -- common/autotest_common.sh@10 -- # set +x 00:05:16.182 
************************************ 00:05:16.182 END TEST scheduler_create_thread 00:05:16.182 ************************************ 00:05:16.182 15:47:27 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:16.182 15:47:27 -- scheduler/scheduler.sh@46 -- # killprocess 56992 00:05:16.182 15:47:27 -- common/autotest_common.sh@936 -- # '[' -z 56992 ']' 00:05:16.182 15:47:27 -- common/autotest_common.sh@940 -- # kill -0 56992 00:05:16.182 15:47:27 -- common/autotest_common.sh@941 -- # uname 00:05:16.182 15:47:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:16.182 15:47:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56992 00:05:16.182 15:47:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:16.182 15:47:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:16.182 15:47:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56992' 00:05:16.182 killing process with pid 56992 00:05:16.182 15:47:27 -- common/autotest_common.sh@955 -- # kill 56992 00:05:16.182 15:47:27 -- common/autotest_common.sh@960 -- # wait 56992 00:05:16.440 [2024-11-29 15:47:27.769342] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:17.375 00:05:17.375 real 0m3.731s 00:05:17.375 user 0m5.682s 00:05:17.375 sys 0m0.365s 00:05:17.375 15:47:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:17.375 15:47:28 -- common/autotest_common.sh@10 -- # set +x 00:05:17.375 ************************************ 00:05:17.375 END TEST event_scheduler 00:05:17.375 ************************************ 00:05:17.375 15:47:28 -- event/event.sh@51 -- # modprobe -n nbd 00:05:17.375 15:47:28 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:17.375 15:47:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.375 15:47:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.375 15:47:28 -- common/autotest_common.sh@10 -- # set +x 00:05:17.375 ************************************ 00:05:17.375 START TEST app_repeat 00:05:17.375 ************************************ 00:05:17.375 15:47:28 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:17.375 15:47:28 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.375 15:47:28 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.375 15:47:28 -- event/event.sh@13 -- # local nbd_list 00:05:17.375 15:47:28 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.375 15:47:28 -- event/event.sh@14 -- # local bdev_list 00:05:17.375 15:47:28 -- event/event.sh@15 -- # local repeat_times=4 00:05:17.375 15:47:28 -- event/event.sh@17 -- # modprobe nbd 00:05:17.375 15:47:28 -- event/event.sh@19 -- # repeat_pid=57087 00:05:17.375 15:47:28 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.375 Process app_repeat pid: 57087 00:05:17.375 15:47:28 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57087' 00:05:17.375 spdk_app_start Round 0 00:05:17.375 15:47:28 -- event/event.sh@23 -- # for i in {0..2} 00:05:17.375 15:47:28 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:17.375 15:47:28 -- event/event.sh@25 -- # waitforlisten 57087 /var/tmp/spdk-nbd.sock 00:05:17.375 15:47:28 -- common/autotest_common.sh@829 -- # '[' -z 57087 ']' 00:05:17.375 15:47:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.375 15:47:28 -- event/event.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:17.375 15:47:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:17.375 15:47:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:17.375 15:47:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.375 15:47:28 -- common/autotest_common.sh@10 -- # set +x 00:05:17.375 [2024-11-29 15:47:28.659835] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:17.375 [2024-11-29 15:47:28.659947] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57087 ] 00:05:17.634 [2024-11-29 15:47:28.804174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.634 [2024-11-29 15:47:28.976863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.634 [2024-11-29 15:47:28.976877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.201 15:47:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.201 15:47:29 -- common/autotest_common.sh@862 -- # return 0 00:05:18.201 15:47:29 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:18.459 Malloc0 00:05:18.459 15:47:29 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:18.459 Malloc1 00:05:18.459 15:47:29 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:18.459 15:47:29 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.459 15:47:29 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.459 15:47:29 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:18.459 15:47:29 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.459 15:47:29 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:18.459 15:47:29 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:18.459 15:47:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.459 15:47:29 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.459 15:47:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:18.459 15:47:29 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.459 15:47:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:18.459 15:47:29 -- bdev/nbd_common.sh@12 -- # local i 00:05:18.459 15:47:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:18.459 15:47:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.459 15:47:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:18.717 /dev/nbd0 00:05:18.717 15:47:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:18.717 15:47:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:18.717 15:47:30 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:18.717 15:47:30 -- common/autotest_common.sh@867 -- # local i 00:05:18.717 15:47:30 -- common/autotest_common.sh@869 -- 
# (( i = 1 )) 00:05:18.717 15:47:30 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:18.717 15:47:30 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:18.717 15:47:30 -- common/autotest_common.sh@871 -- # break 00:05:18.717 15:47:30 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:18.717 15:47:30 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:18.717 15:47:30 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.717 1+0 records in 00:05:18.717 1+0 records out 00:05:18.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376768 s, 10.9 MB/s 00:05:18.717 15:47:30 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.717 15:47:30 -- common/autotest_common.sh@884 -- # size=4096 00:05:18.717 15:47:30 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.717 15:47:30 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:18.717 15:47:30 -- common/autotest_common.sh@887 -- # return 0 00:05:18.717 15:47:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.717 15:47:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.717 15:47:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:19.006 /dev/nbd1 00:05:19.006 15:47:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:19.006 15:47:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:19.006 15:47:30 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:19.006 15:47:30 -- common/autotest_common.sh@867 -- # local i 00:05:19.006 15:47:30 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:19.006 15:47:30 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:19.006 15:47:30 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:19.006 15:47:30 -- common/autotest_common.sh@871 -- # break 00:05:19.006 15:47:30 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:19.006 15:47:30 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:19.006 15:47:30 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.006 1+0 records in 00:05:19.006 1+0 records out 00:05:19.006 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000147487 s, 27.8 MB/s 00:05:19.006 15:47:30 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.006 15:47:30 -- common/autotest_common.sh@884 -- # size=4096 00:05:19.006 15:47:30 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.006 15:47:30 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:19.006 15:47:30 -- common/autotest_common.sh@887 -- # return 0 00:05:19.006 15:47:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.006 15:47:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.006 15:47:30 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.006 15:47:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.006 15:47:30 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.006 15:47:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:19.006 { 00:05:19.006 "nbd_device": "/dev/nbd0", 00:05:19.006 "bdev_name": "Malloc0" 00:05:19.006 }, 00:05:19.006 { 
00:05:19.006 "nbd_device": "/dev/nbd1", 00:05:19.006 "bdev_name": "Malloc1" 00:05:19.006 } 00:05:19.006 ]' 00:05:19.006 15:47:30 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:19.006 { 00:05:19.006 "nbd_device": "/dev/nbd0", 00:05:19.006 "bdev_name": "Malloc0" 00:05:19.006 }, 00:05:19.006 { 00:05:19.006 "nbd_device": "/dev/nbd1", 00:05:19.006 "bdev_name": "Malloc1" 00:05:19.006 } 00:05:19.006 ]' 00:05:19.006 15:47:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:19.265 /dev/nbd1' 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:19.265 /dev/nbd1' 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@65 -- # count=2 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@95 -- # count=2 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:19.265 256+0 records in 00:05:19.265 256+0 records out 00:05:19.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00976313 s, 107 MB/s 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:19.265 256+0 records in 00:05:19.265 256+0 records out 00:05:19.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225189 s, 46.6 MB/s 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:19.265 256+0 records in 00:05:19.265 256+0 records out 00:05:19.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228151 s, 46.0 MB/s 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:19.265 
15:47:30 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@51 -- # local i 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@41 -- # break 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.265 15:47:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:19.524 15:47:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:19.524 15:47:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:19.524 15:47:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:19.524 15:47:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.524 15:47:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.524 15:47:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:19.524 15:47:30 -- bdev/nbd_common.sh@41 -- # break 00:05:19.524 15:47:30 -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.524 15:47:30 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.524 15:47:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.524 15:47:30 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.783 15:47:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:19.783 15:47:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.783 15:47:31 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:19.783 15:47:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:19.783 15:47:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.783 15:47:31 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:19.783 15:47:31 -- bdev/nbd_common.sh@65 -- # true 00:05:19.783 15:47:31 -- bdev/nbd_common.sh@65 -- # count=0 00:05:19.783 15:47:31 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:19.783 15:47:31 -- bdev/nbd_common.sh@104 -- # count=0 00:05:19.783 15:47:31 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:19.783 15:47:31 -- bdev/nbd_common.sh@109 -- # return 0 00:05:19.783 15:47:31 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:20.041 15:47:31 -- event/event.sh@35 -- # sleep 3 00:05:20.976 [2024-11-29 15:47:32.081752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.976 [2024-11-29 15:47:32.211364] reactor.c: 937:reactor_run: *NOTICE*: Reactor 
started on core 1 00:05:20.976 [2024-11-29 15:47:32.211440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.976 [2024-11-29 15:47:32.317324] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:20.976 [2024-11-29 15:47:32.317362] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:23.511 spdk_app_start Round 1 00:05:23.511 15:47:34 -- event/event.sh@23 -- # for i in {0..2} 00:05:23.511 15:47:34 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:23.511 15:47:34 -- event/event.sh@25 -- # waitforlisten 57087 /var/tmp/spdk-nbd.sock 00:05:23.511 15:47:34 -- common/autotest_common.sh@829 -- # '[' -z 57087 ']' 00:05:23.511 15:47:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:23.511 15:47:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:23.511 15:47:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:23.511 15:47:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.511 15:47:34 -- common/autotest_common.sh@10 -- # set +x 00:05:23.511 15:47:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.511 15:47:34 -- common/autotest_common.sh@862 -- # return 0 00:05:23.511 15:47:34 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.511 Malloc0 00:05:23.511 15:47:34 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.769 Malloc1 00:05:23.770 15:47:35 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.770 15:47:35 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.770 15:47:35 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.770 15:47:35 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.770 15:47:35 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.770 15:47:35 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.770 15:47:35 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.770 15:47:35 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.770 15:47:35 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.770 15:47:35 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.770 15:47:35 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.770 15:47:35 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.770 15:47:35 -- bdev/nbd_common.sh@12 -- # local i 00:05:23.770 15:47:35 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.770 15:47:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.770 15:47:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:24.027 /dev/nbd0 00:05:24.027 15:47:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:24.027 15:47:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:24.027 15:47:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:24.027 15:47:35 -- common/autotest_common.sh@867 -- # local i 00:05:24.027 15:47:35 
-- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:24.027 15:47:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:24.027 15:47:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:24.027 15:47:35 -- common/autotest_common.sh@871 -- # break 00:05:24.027 15:47:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:24.027 15:47:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:24.027 15:47:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.027 1+0 records in 00:05:24.027 1+0 records out 00:05:24.027 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295795 s, 13.8 MB/s 00:05:24.027 15:47:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.027 15:47:35 -- common/autotest_common.sh@884 -- # size=4096 00:05:24.027 15:47:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.027 15:47:35 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:24.027 15:47:35 -- common/autotest_common.sh@887 -- # return 0 00:05:24.027 15:47:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.027 15:47:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.027 15:47:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:24.027 /dev/nbd1 00:05:24.284 15:47:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:24.284 15:47:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:24.284 15:47:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:24.284 15:47:35 -- common/autotest_common.sh@867 -- # local i 00:05:24.284 15:47:35 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:24.284 15:47:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:24.284 15:47:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:24.284 15:47:35 -- common/autotest_common.sh@871 -- # break 00:05:24.284 15:47:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:24.284 15:47:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:24.284 15:47:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.284 1+0 records in 00:05:24.284 1+0 records out 00:05:24.284 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220145 s, 18.6 MB/s 00:05:24.284 15:47:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.284 15:47:35 -- common/autotest_common.sh@884 -- # size=4096 00:05:24.284 15:47:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.284 15:47:35 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:24.284 15:47:35 -- common/autotest_common.sh@887 -- # return 0 00:05:24.284 15:47:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.284 15:47:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.284 15:47:35 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.284 15:47:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.284 15:47:35 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.284 15:47:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:24.284 { 00:05:24.284 "nbd_device": "/dev/nbd0", 00:05:24.284 "bdev_name": "Malloc0" 
00:05:24.284 }, 00:05:24.284 { 00:05:24.284 "nbd_device": "/dev/nbd1", 00:05:24.284 "bdev_name": "Malloc1" 00:05:24.284 } 00:05:24.284 ]' 00:05:24.284 15:47:35 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:24.284 { 00:05:24.284 "nbd_device": "/dev/nbd0", 00:05:24.284 "bdev_name": "Malloc0" 00:05:24.284 }, 00:05:24.284 { 00:05:24.284 "nbd_device": "/dev/nbd1", 00:05:24.284 "bdev_name": "Malloc1" 00:05:24.284 } 00:05:24.284 ]' 00:05:24.284 15:47:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.284 15:47:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:24.284 /dev/nbd1' 00:05:24.284 15:47:35 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:24.284 /dev/nbd1' 00:05:24.284 15:47:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.284 15:47:35 -- bdev/nbd_common.sh@65 -- # count=2 00:05:24.284 15:47:35 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:24.284 15:47:35 -- bdev/nbd_common.sh@95 -- # count=2 00:05:24.284 15:47:35 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:24.284 15:47:35 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:24.284 15:47:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:24.543 256+0 records in 00:05:24.543 256+0 records out 00:05:24.543 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00763234 s, 137 MB/s 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:24.543 256+0 records in 00:05:24.543 256+0 records out 00:05:24.543 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019016 s, 55.1 MB/s 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:24.543 256+0 records in 00:05:24.543 256+0 records out 00:05:24.543 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0187216 s, 56.0 MB/s 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@51 -- # local i 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:24.543 15:47:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:24.802 15:47:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:24.802 15:47:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:24.802 15:47:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.802 15:47:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.802 15:47:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:24.802 15:47:35 -- bdev/nbd_common.sh@41 -- # break 00:05:24.802 15:47:35 -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.802 15:47:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.802 15:47:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:24.802 15:47:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:24.802 15:47:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:24.802 15:47:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:24.802 15:47:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.802 15:47:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.802 15:47:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:24.802 15:47:36 -- bdev/nbd_common.sh@41 -- # break 00:05:24.802 15:47:36 -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.802 15:47:36 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.802 15:47:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.802 15:47:36 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.060 15:47:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:25.060 15:47:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.060 15:47:36 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:25.060 15:47:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:25.060 15:47:36 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:25.060 15:47:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.060 15:47:36 -- bdev/nbd_common.sh@65 -- # true 00:05:25.060 15:47:36 -- bdev/nbd_common.sh@65 -- # count=0 00:05:25.060 15:47:36 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:25.060 15:47:36 -- bdev/nbd_common.sh@104 -- # count=0 00:05:25.060 15:47:36 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:25.060 15:47:36 -- bdev/nbd_common.sh@109 -- # return 0 00:05:25.060 15:47:36 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:25.318 15:47:36 -- event/event.sh@35 -- # sleep 3 00:05:25.884 [2024-11-29 15:47:37.299192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.143 [2024-11-29 15:47:37.429371] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:05:26.143 [2024-11-29 15:47:37.429390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.143 [2024-11-29 15:47:37.534496] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:26.143 [2024-11-29 15:47:37.534539] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:28.672 15:47:39 -- event/event.sh@23 -- # for i in {0..2} 00:05:28.672 15:47:39 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:28.672 spdk_app_start Round 2 00:05:28.672 15:47:39 -- event/event.sh@25 -- # waitforlisten 57087 /var/tmp/spdk-nbd.sock 00:05:28.672 15:47:39 -- common/autotest_common.sh@829 -- # '[' -z 57087 ']' 00:05:28.672 15:47:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.672 15:47:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.672 15:47:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:28.672 15:47:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.672 15:47:39 -- common/autotest_common.sh@10 -- # set +x 00:05:28.672 15:47:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.672 15:47:39 -- common/autotest_common.sh@862 -- # return 0 00:05:28.672 15:47:39 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.672 Malloc0 00:05:28.672 15:47:40 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.930 Malloc1 00:05:28.930 15:47:40 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.930 15:47:40 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.930 15:47:40 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.930 15:47:40 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:28.930 15:47:40 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.930 15:47:40 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:28.930 15:47:40 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.930 15:47:40 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.930 15:47:40 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.930 15:47:40 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:28.930 15:47:40 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.930 15:47:40 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:28.930 15:47:40 -- bdev/nbd_common.sh@12 -- # local i 00:05:28.930 15:47:40 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:28.930 15:47:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.931 15:47:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:29.189 /dev/nbd0 00:05:29.189 15:47:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:29.189 15:47:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:29.189 15:47:40 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:29.189 15:47:40 -- common/autotest_common.sh@867 -- # local i 
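The waitfornbd helper entered here, and traced in full during the earlier rounds, gates every nbd_start_disk call: it polls /proc/partitions until the kernel has registered the device node, then proves the device actually serves I/O with a single 4 KiB direct read. A reconstruction from the trace (both retry limits match the traced values; the inter-poll sleep is an assumption, since the traced runs always succeed on the first pass):

    waitfornbd() {
        local nbd_name=$1 i
        local tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break    # device node visible yet?
            sleep 0.1                                           # assumed; not visible in trace
        done
        for (( i = 1; i <= 20; i++ )); do
            # one direct read; a non-empty capture file means the export is live
            dd if=/dev/$nbd_name of="$tmp" bs=4096 count=1 iflag=direct || continue
            local size=$(stat -c %s "$tmp")
            rm -f "$tmp"
            [ "$size" != 0 ] && return 0
        done
        return 1
    }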
00:05:29.189 15:47:40 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:29.189 15:47:40 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:29.189 15:47:40 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:29.189 15:47:40 -- common/autotest_common.sh@871 -- # break 00:05:29.189 15:47:40 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:29.189 15:47:40 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:29.189 15:47:40 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.189 1+0 records in 00:05:29.189 1+0 records out 00:05:29.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177731 s, 23.0 MB/s 00:05:29.189 15:47:40 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.189 15:47:40 -- common/autotest_common.sh@884 -- # size=4096 00:05:29.189 15:47:40 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.189 15:47:40 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:29.189 15:47:40 -- common/autotest_common.sh@887 -- # return 0 00:05:29.189 15:47:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.189 15:47:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.189 15:47:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:29.448 /dev/nbd1 00:05:29.448 15:47:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:29.448 15:47:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:29.448 15:47:40 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:29.448 15:47:40 -- common/autotest_common.sh@867 -- # local i 00:05:29.448 15:47:40 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:29.448 15:47:40 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:29.448 15:47:40 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:29.448 15:47:40 -- common/autotest_common.sh@871 -- # break 00:05:29.448 15:47:40 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:29.448 15:47:40 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:29.448 15:47:40 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.448 1+0 records in 00:05:29.448 1+0 records out 00:05:29.448 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222178 s, 18.4 MB/s 00:05:29.448 15:47:40 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.448 15:47:40 -- common/autotest_common.sh@884 -- # size=4096 00:05:29.448 15:47:40 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.448 15:47:40 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:29.448 15:47:40 -- common/autotest_common.sh@887 -- # return 0 00:05:29.448 15:47:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.448 15:47:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.448 15:47:40 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.448 15:47:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.448 15:47:40 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:29.707 { 00:05:29.707 "nbd_device": "/dev/nbd0", 00:05:29.707 
"bdev_name": "Malloc0" 00:05:29.707 }, 00:05:29.707 { 00:05:29.707 "nbd_device": "/dev/nbd1", 00:05:29.707 "bdev_name": "Malloc1" 00:05:29.707 } 00:05:29.707 ]' 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:29.707 { 00:05:29.707 "nbd_device": "/dev/nbd0", 00:05:29.707 "bdev_name": "Malloc0" 00:05:29.707 }, 00:05:29.707 { 00:05:29.707 "nbd_device": "/dev/nbd1", 00:05:29.707 "bdev_name": "Malloc1" 00:05:29.707 } 00:05:29.707 ]' 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:29.707 /dev/nbd1' 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:29.707 /dev/nbd1' 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@65 -- # count=2 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@95 -- # count=2 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:29.707 256+0 records in 00:05:29.707 256+0 records out 00:05:29.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00810099 s, 129 MB/s 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:29.707 256+0 records in 00:05:29.707 256+0 records out 00:05:29.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156357 s, 67.1 MB/s 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:29.707 256+0 records in 00:05:29.707 256+0 records out 00:05:29.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170796 s, 61.4 MB/s 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:29.707 15:47:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:29.707 15:47:41 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:29.707 15:47:41 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:29.707 15:47:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.707 15:47:41 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.707 15:47:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:29.707 15:47:41 -- bdev/nbd_common.sh@51 -- # local i 00:05:29.707 15:47:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.707 15:47:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:29.965 15:47:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:29.965 15:47:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:29.965 15:47:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:29.965 15:47:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.965 15:47:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.965 15:47:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:29.965 15:47:41 -- bdev/nbd_common.sh@41 -- # break 00:05:29.965 15:47:41 -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.965 15:47:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.965 15:47:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:30.224 15:47:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:30.224 15:47:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:30.224 15:47:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:30.224 15:47:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.224 15:47:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.224 15:47:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:30.224 15:47:41 -- bdev/nbd_common.sh@41 -- # break 00:05:30.224 15:47:41 -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.224 15:47:41 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.224 15:47:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.224 15:47:41 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.224 15:47:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:30.224 15:47:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.224 15:47:41 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:30.224 15:47:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:30.224 15:47:41 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:30.224 15:47:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.224 15:47:41 -- bdev/nbd_common.sh@65 -- # true 00:05:30.224 15:47:41 -- bdev/nbd_common.sh@65 -- # count=0 00:05:30.224 15:47:41 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:30.224 15:47:41 -- bdev/nbd_common.sh@104 -- # count=0 00:05:30.224 15:47:41 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:30.224 15:47:41 -- bdev/nbd_common.sh@109 -- # return 0 00:05:30.224 15:47:41 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:30.488 15:47:41 -- event/event.sh@35 -- # sleep 3 00:05:31.434 [2024-11-29 15:47:42.525100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.434 
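With the third restart underway, the shape of app_repeat is fully visible: each round waits for the app's RPC socket, creates the two 64 MB malloc bdevs (block size 4096), runs the nbd write/verify pass, then asks the app to exit with spdk_kill_instance SIGTERM and sleeps 3 seconds while the reactor comes back up. Schematically, with $rpc standing for scripts/rpc.py and the helper bodies elided (a sketch of the traced event.sh loop, not a verbatim copy):

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock
        $rpc -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # -> Malloc0
        $rpc -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # -> Malloc1
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        $rpc -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM    # app restarts for the next round
        sleep 3
    done
    # the final 'Round 3' message is produced when killprocess delivers the last SIGTERM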
[2024-11-29 15:47:42.654352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.434 [2024-11-29 15:47:42.654492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.434 [2024-11-29 15:47:42.758613] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:31.434 [2024-11-29 15:47:42.758667] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:33.962 15:47:44 -- event/event.sh@38 -- # waitforlisten 57087 /var/tmp/spdk-nbd.sock 00:05:33.962 15:47:44 -- common/autotest_common.sh@829 -- # '[' -z 57087 ']' 00:05:33.962 15:47:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:33.962 15:47:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.962 15:47:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:33.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:33.962 15:47:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.962 15:47:44 -- common/autotest_common.sh@10 -- # set +x 00:05:33.962 15:47:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.962 15:47:45 -- common/autotest_common.sh@862 -- # return 0 00:05:33.962 15:47:45 -- event/event.sh@39 -- # killprocess 57087 00:05:33.962 15:47:45 -- common/autotest_common.sh@936 -- # '[' -z 57087 ']' 00:05:33.962 15:47:45 -- common/autotest_common.sh@940 -- # kill -0 57087 00:05:33.962 15:47:45 -- common/autotest_common.sh@941 -- # uname 00:05:33.962 15:47:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:33.962 15:47:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57087 00:05:33.962 killing process with pid 57087 00:05:33.962 15:47:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:33.962 15:47:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:33.962 15:47:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57087' 00:05:33.962 15:47:45 -- common/autotest_common.sh@955 -- # kill 57087 00:05:33.962 15:47:45 -- common/autotest_common.sh@960 -- # wait 57087 00:05:34.529 spdk_app_start is called in Round 0. 00:05:34.529 Shutdown signal received, stop current app iteration 00:05:34.529 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:34.529 spdk_app_start is called in Round 1. 00:05:34.529 Shutdown signal received, stop current app iteration 00:05:34.529 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:34.529 spdk_app_start is called in Round 2. 00:05:34.529 Shutdown signal received, stop current app iteration 00:05:34.529 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:34.529 spdk_app_start is called in Round 3. 
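killprocess, traced just above for the repeat app (pid 57087), is autotest_common.sh's guarded kill: confirm the pid is given and still alive, look up its command name, refuse to signal anything running as sudo, then SIGTERM and wait for it. A reconstruction that keeps exactly the checks visible in the trace (the real helper carries extra retry and error handling that is trimmed here):

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                        # is it still running?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1            # never SIGTERM a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                       # reap; also surfaces exit status
    }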
00:05:34.529 Shutdown signal received, stop current app iteration 00:05:34.529 15:47:45 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:34.529 15:47:45 -- event/event.sh@42 -- # return 0 00:05:34.529 00:05:34.529 real 0m17.091s 00:05:34.529 user 0m36.481s 00:05:34.529 sys 0m1.965s 00:05:34.529 15:47:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.529 ************************************ 00:05:34.529 END TEST app_repeat 00:05:34.529 ************************************ 00:05:34.529 15:47:45 -- common/autotest_common.sh@10 -- # set +x 00:05:34.529 15:47:45 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:34.529 15:47:45 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:34.529 15:47:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.529 15:47:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.529 15:47:45 -- common/autotest_common.sh@10 -- # set +x 00:05:34.529 ************************************ 00:05:34.529 START TEST cpu_locks 00:05:34.529 ************************************ 00:05:34.529 15:47:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:34.529 * Looking for test storage... 00:05:34.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:34.529 15:47:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:34.529 15:47:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:34.529 15:47:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:34.529 15:47:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:34.529 15:47:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:34.529 15:47:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:34.529 15:47:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:34.529 15:47:45 -- scripts/common.sh@335 -- # IFS=.-: 00:05:34.529 15:47:45 -- scripts/common.sh@335 -- # read -ra ver1 00:05:34.529 15:47:45 -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.529 15:47:45 -- scripts/common.sh@336 -- # read -ra ver2 00:05:34.529 15:47:45 -- scripts/common.sh@337 -- # local 'op=<' 00:05:34.529 15:47:45 -- scripts/common.sh@339 -- # ver1_l=2 00:05:34.529 15:47:45 -- scripts/common.sh@340 -- # ver2_l=1 00:05:34.529 15:47:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:34.529 15:47:45 -- scripts/common.sh@343 -- # case "$op" in 00:05:34.529 15:47:45 -- scripts/common.sh@344 -- # : 1 00:05:34.529 15:47:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:34.529 15:47:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.529 15:47:45 -- scripts/common.sh@364 -- # decimal 1 00:05:34.529 15:47:45 -- scripts/common.sh@352 -- # local d=1 00:05:34.529 15:47:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.529 15:47:45 -- scripts/common.sh@354 -- # echo 1 00:05:34.529 15:47:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:34.529 15:47:45 -- scripts/common.sh@365 -- # decimal 2 00:05:34.529 15:47:45 -- scripts/common.sh@352 -- # local d=2 00:05:34.529 15:47:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.529 15:47:45 -- scripts/common.sh@354 -- # echo 2 00:05:34.529 15:47:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:34.529 15:47:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:34.529 15:47:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:34.529 15:47:45 -- scripts/common.sh@367 -- # return 0 00:05:34.529 15:47:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.529 15:47:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:34.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.529 --rc genhtml_branch_coverage=1 00:05:34.529 --rc genhtml_function_coverage=1 00:05:34.529 --rc genhtml_legend=1 00:05:34.529 --rc geninfo_all_blocks=1 00:05:34.529 --rc geninfo_unexecuted_blocks=1 00:05:34.529 00:05:34.529 ' 00:05:34.529 15:47:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:34.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.529 --rc genhtml_branch_coverage=1 00:05:34.529 --rc genhtml_function_coverage=1 00:05:34.529 --rc genhtml_legend=1 00:05:34.529 --rc geninfo_all_blocks=1 00:05:34.529 --rc geninfo_unexecuted_blocks=1 00:05:34.529 00:05:34.529 ' 00:05:34.529 15:47:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:34.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.529 --rc genhtml_branch_coverage=1 00:05:34.529 --rc genhtml_function_coverage=1 00:05:34.529 --rc genhtml_legend=1 00:05:34.529 --rc geninfo_all_blocks=1 00:05:34.529 --rc geninfo_unexecuted_blocks=1 00:05:34.529 00:05:34.529 ' 00:05:34.529 15:47:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:34.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.530 --rc genhtml_branch_coverage=1 00:05:34.530 --rc genhtml_function_coverage=1 00:05:34.530 --rc genhtml_legend=1 00:05:34.530 --rc geninfo_all_blocks=1 00:05:34.530 --rc geninfo_unexecuted_blocks=1 00:05:34.530 00:05:34.530 ' 00:05:34.530 15:47:45 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:34.530 15:47:45 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:34.530 15:47:45 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:34.530 15:47:45 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:34.530 15:47:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.530 15:47:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.530 15:47:45 -- common/autotest_common.sh@10 -- # set +x 00:05:34.530 ************************************ 00:05:34.530 START TEST default_locks 00:05:34.530 ************************************ 00:05:34.530 15:47:45 -- common/autotest_common.sh@1114 -- # default_locks 00:05:34.530 15:47:45 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57511 00:05:34.530 15:47:45 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.530 15:47:45 -- event/cpu_locks.sh@47 -- # waitforlisten 
57511 00:05:34.530 15:47:45 -- common/autotest_common.sh@829 -- # '[' -z 57511 ']' 00:05:34.530 15:47:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.530 15:47:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.530 15:47:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.530 15:47:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.530 15:47:45 -- common/autotest_common.sh@10 -- # set +x 00:05:34.788 [2024-11-29 15:47:45.978984] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:34.788 [2024-11-29 15:47:45.979092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57511 ] 00:05:34.788 [2024-11-29 15:47:46.127484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.045 [2024-11-29 15:47:46.263113] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:35.045 [2024-11-29 15:47:46.263263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.610 15:47:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.610 15:47:46 -- common/autotest_common.sh@862 -- # return 0 00:05:35.610 15:47:46 -- event/cpu_locks.sh@49 -- # locks_exist 57511 00:05:35.610 15:47:46 -- event/cpu_locks.sh@22 -- # lslocks -p 57511 00:05:35.610 15:47:46 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.610 15:47:46 -- event/cpu_locks.sh@50 -- # killprocess 57511 00:05:35.610 15:47:46 -- common/autotest_common.sh@936 -- # '[' -z 57511 ']' 00:05:35.610 15:47:46 -- common/autotest_common.sh@940 -- # kill -0 57511 00:05:35.610 15:47:46 -- common/autotest_common.sh@941 -- # uname 00:05:35.610 15:47:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:35.610 15:47:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57511 00:05:35.610 killing process with pid 57511 00:05:35.611 15:47:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:35.611 15:47:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:35.611 15:47:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57511' 00:05:35.611 15:47:46 -- common/autotest_common.sh@955 -- # kill 57511 00:05:35.611 15:47:46 -- common/autotest_common.sh@960 -- # wait 57511 00:05:37.027 15:47:48 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57511 00:05:37.027 15:47:48 -- common/autotest_common.sh@650 -- # local es=0 00:05:37.027 15:47:48 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57511 00:05:37.027 15:47:48 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:37.027 15:47:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:37.027 15:47:48 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:37.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:37.027 ERROR: process (pid: 57511) is no longer running 00:05:37.027 15:47:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:37.027 15:47:48 -- common/autotest_common.sh@653 -- # waitforlisten 57511 00:05:37.027 15:47:48 -- common/autotest_common.sh@829 -- # '[' -z 57511 ']' 00:05:37.027 15:47:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.027 15:47:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.027 15:47:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.027 15:47:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.027 15:47:48 -- common/autotest_common.sh@10 -- # set +x 00:05:37.027 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57511) - No such process 00:05:37.027 15:47:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.027 15:47:48 -- common/autotest_common.sh@862 -- # return 1 00:05:37.027 15:47:48 -- common/autotest_common.sh@653 -- # es=1 00:05:37.027 15:47:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:37.027 15:47:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:37.027 15:47:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:37.027 15:47:48 -- event/cpu_locks.sh@54 -- # no_locks 00:05:37.027 15:47:48 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:37.027 15:47:48 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:37.027 15:47:48 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:37.027 00:05:37.027 real 0m2.181s 00:05:37.027 user 0m2.154s 00:05:37.027 sys 0m0.387s 00:05:37.028 15:47:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.028 15:47:48 -- common/autotest_common.sh@10 -- # set +x 00:05:37.028 ************************************ 00:05:37.028 END TEST default_locks 00:05:37.028 ************************************ 00:05:37.028 15:47:48 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:37.028 15:47:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.028 15:47:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.028 15:47:48 -- common/autotest_common.sh@10 -- # set +x 00:05:37.028 ************************************ 00:05:37.028 START TEST default_locks_via_rpc 00:05:37.028 ************************************ 00:05:37.028 15:47:48 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:37.028 15:47:48 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57564 00:05:37.028 15:47:48 -- event/cpu_locks.sh@63 -- # waitforlisten 57564 00:05:37.028 15:47:48 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.028 15:47:48 -- common/autotest_common.sh@829 -- # '[' -z 57564 ']' 00:05:37.028 15:47:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.028 15:47:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.028 15:47:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.028 15:47:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.028 15:47:48 -- common/autotest_common.sh@10 -- # set +x 00:05:37.028 [2024-11-29 15:47:48.189901] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
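The target coming up here (pid 57564) backs default_locks_via_rpc, which exercises the same core locks through runtime RPCs instead of process lifetime: framework_disable_cpumask_locks must release the lock files (so the no_locks assertion passes) and framework_enable_cpumask_locks must re-take them (so locks_exist passes again). Both RPC names appear verbatim in the trace below; the sketch assumes the default /var/tmp/spdk.sock socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_disable_cpumask_locks    # drops the per-core lock files
    $rpc framework_enable_cpumask_locks     # re-acquires them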
00:05:37.028 [2024-11-29 15:47:48.189998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57564 ] 00:05:37.028 [2024-11-29 15:47:48.331898] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.286 [2024-11-29 15:47:48.475424] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:37.286 [2024-11-29 15:47:48.475601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.852 15:47:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.852 15:47:49 -- common/autotest_common.sh@862 -- # return 0 00:05:37.852 15:47:49 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:37.852 15:47:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.852 15:47:49 -- common/autotest_common.sh@10 -- # set +x 00:05:37.852 15:47:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.852 15:47:49 -- event/cpu_locks.sh@67 -- # no_locks 00:05:37.852 15:47:49 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:37.852 15:47:49 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:37.852 15:47:49 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:37.852 15:47:49 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:37.852 15:47:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.852 15:47:49 -- common/autotest_common.sh@10 -- # set +x 00:05:37.852 15:47:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.852 15:47:49 -- event/cpu_locks.sh@71 -- # locks_exist 57564 00:05:37.852 15:47:49 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:37.852 15:47:49 -- event/cpu_locks.sh@22 -- # lslocks -p 57564 00:05:37.852 15:47:49 -- event/cpu_locks.sh@73 -- # killprocess 57564 00:05:37.852 15:47:49 -- common/autotest_common.sh@936 -- # '[' -z 57564 ']' 00:05:37.852 15:47:49 -- common/autotest_common.sh@940 -- # kill -0 57564 00:05:37.852 15:47:49 -- common/autotest_common.sh@941 -- # uname 00:05:37.852 15:47:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:37.852 15:47:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57564 00:05:37.852 killing process with pid 57564 00:05:37.852 15:47:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:37.852 15:47:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:37.852 15:47:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57564' 00:05:37.852 15:47:49 -- common/autotest_common.sh@955 -- # kill 57564 00:05:37.852 15:47:49 -- common/autotest_common.sh@960 -- # wait 57564 00:05:39.225 00:05:39.225 real 0m2.289s 00:05:39.225 user 0m2.318s 00:05:39.225 sys 0m0.387s 00:05:39.225 15:47:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.225 ************************************ 00:05:39.225 END TEST default_locks_via_rpc 00:05:39.225 ************************************ 00:05:39.225 15:47:50 -- common/autotest_common.sh@10 -- # set +x 00:05:39.225 15:47:50 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:39.225 15:47:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.225 15:47:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.225 15:47:50 -- common/autotest_common.sh@10 -- # set +x 00:05:39.225 
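A minimal sketch of the two checks the default_locks tests above exercise, assuming a running spdk_tgt (pid in $pid) and SPDK's scripts/rpc.py; the spdk_cpu_lock pattern matches the /var/tmp/spdk_cpu_lock_* files the target flocks:

    # locks_exist: true while the target holds its per-core lock files
    locks_exist() { lslocks -p "$1" | grep -q spdk_cpu_lock; }
    locks_exist "$pid" && echo "core locks held"
    # the via_rpc variant toggles the same locks on a live target
    scripts/rpc.py framework_disable_cpumask_locks
    locks_exist "$pid" || echo "core locks released"
    scripts/rpc.py framework_enable_cpumask_locks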
************************************ 00:05:39.225 START TEST non_locking_app_on_locked_coremask 00:05:39.225 ************************************ 00:05:39.225 15:47:50 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:05:39.225 15:47:50 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57621 00:05:39.225 15:47:50 -- event/cpu_locks.sh@81 -- # waitforlisten 57621 /var/tmp/spdk.sock 00:05:39.225 15:47:50 -- common/autotest_common.sh@829 -- # '[' -z 57621 ']' 00:05:39.225 15:47:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.225 15:47:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.225 15:47:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.225 15:47:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.225 15:47:50 -- common/autotest_common.sh@10 -- # set +x 00:05:39.225 15:47:50 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.225 [2024-11-29 15:47:50.537636] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:39.225 [2024-11-29 15:47:50.537746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57621 ] 00:05:39.483 [2024-11-29 15:47:50.682995] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.483 [2024-11-29 15:47:50.824601] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:39.483 [2024-11-29 15:47:50.824756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.050 15:47:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.050 15:47:51 -- common/autotest_common.sh@862 -- # return 0 00:05:40.050 15:47:51 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57632 00:05:40.050 15:47:51 -- event/cpu_locks.sh@85 -- # waitforlisten 57632 /var/tmp/spdk2.sock 00:05:40.050 15:47:51 -- common/autotest_common.sh@829 -- # '[' -z 57632 ']' 00:05:40.050 15:47:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.050 15:47:51 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:40.050 15:47:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:40.050 15:47:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:40.050 15:47:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.050 15:47:51 -- common/autotest_common.sh@10 -- # set +x 00:05:40.050 [2024-11-29 15:47:51.419503] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:40.050 [2024-11-29 15:47:51.419618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57632 ] 00:05:40.309 [2024-11-29 15:47:51.566922] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:40.309 [2024-11-29 15:47:51.566956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.567 [2024-11-29 15:47:51.847359] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:40.567 [2024-11-29 15:47:51.847506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.502 15:47:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.502 15:47:52 -- common/autotest_common.sh@862 -- # return 0 00:05:41.502 15:47:52 -- event/cpu_locks.sh@87 -- # locks_exist 57621 00:05:41.502 15:47:52 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:41.502 15:47:52 -- event/cpu_locks.sh@22 -- # lslocks -p 57621 00:05:42.159 15:47:53 -- event/cpu_locks.sh@89 -- # killprocess 57621 00:05:42.159 15:47:53 -- common/autotest_common.sh@936 -- # '[' -z 57621 ']' 00:05:42.159 15:47:53 -- common/autotest_common.sh@940 -- # kill -0 57621 00:05:42.159 15:47:53 -- common/autotest_common.sh@941 -- # uname 00:05:42.159 15:47:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:42.159 15:47:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57621 00:05:42.159 15:47:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:42.159 killing process with pid 57621 00:05:42.159 15:47:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:42.159 15:47:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57621' 00:05:42.159 15:47:53 -- common/autotest_common.sh@955 -- # kill 57621 00:05:42.159 15:47:53 -- common/autotest_common.sh@960 -- # wait 57621 00:05:44.714 15:47:55 -- event/cpu_locks.sh@90 -- # killprocess 57632 00:05:44.714 15:47:55 -- common/autotest_common.sh@936 -- # '[' -z 57632 ']' 00:05:44.714 15:47:55 -- common/autotest_common.sh@940 -- # kill -0 57632 00:05:44.714 15:47:55 -- common/autotest_common.sh@941 -- # uname 00:05:44.714 15:47:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:44.715 15:47:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57632 00:05:44.715 15:47:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:44.715 killing process with pid 57632 00:05:44.715 15:47:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:44.715 15:47:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57632' 00:05:44.715 15:47:55 -- common/autotest_common.sh@955 -- # kill 57632 00:05:44.715 15:47:55 -- common/autotest_common.sh@960 -- # wait 57632 00:05:45.649 00:05:45.649 real 0m6.325s 00:05:45.649 user 0m6.689s 00:05:45.649 sys 0m0.835s 00:05:45.649 15:47:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:45.649 ************************************ 00:05:45.649 END TEST non_locking_app_on_locked_coremask 00:05:45.649 ************************************ 00:05:45.649 15:47:56 -- common/autotest_common.sh@10 -- # set +x 00:05:45.650 15:47:56 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:45.650 15:47:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.650 15:47:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.650 15:47:56 -- common/autotest_common.sh@10 -- # set +x 00:05:45.650 ************************************ 00:05:45.650 START TEST locking_app_on_unlocked_coremask 00:05:45.650 ************************************ 00:05:45.650 15:47:56 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:05:45.650 15:47:56 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=57731 00:05:45.650 15:47:56 -- event/cpu_locks.sh@99 -- # waitforlisten 57731 /var/tmp/spdk.sock 00:05:45.650 15:47:56 -- common/autotest_common.sh@829 -- # '[' -z 57731 ']' 00:05:45.650 15:47:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.650 15:47:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.650 15:47:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.650 15:47:56 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:45.650 15:47:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.650 15:47:56 -- common/autotest_common.sh@10 -- # set +x 00:05:45.650 [2024-11-29 15:47:56.930796] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:45.650 [2024-11-29 15:47:56.930918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57731 ] 00:05:45.908 [2024-11-29 15:47:57.078628] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:45.908 [2024-11-29 15:47:57.078670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.908 [2024-11-29 15:47:57.224153] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:45.908 [2024-11-29 15:47:57.224307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.476 15:47:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.476 15:47:57 -- common/autotest_common.sh@862 -- # return 0 00:05:46.476 15:47:57 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=57747 00:05:46.476 15:47:57 -- event/cpu_locks.sh@103 -- # waitforlisten 57747 /var/tmp/spdk2.sock 00:05:46.476 15:47:57 -- common/autotest_common.sh@829 -- # '[' -z 57747 ']' 00:05:46.476 15:47:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.476 15:47:57 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:46.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.476 15:47:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.476 15:47:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:46.476 15:47:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.476 15:47:57 -- common/autotest_common.sh@10 -- # set +x 00:05:46.476 [2024-11-29 15:47:57.818822] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:46.476 [2024-11-29 15:47:57.818940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57747 ] 00:05:46.734 [2024-11-29 15:47:57.967217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.992 [2024-11-29 15:47:58.256555] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.992 [2024-11-29 15:47:58.256714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.926 15:47:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.926 15:47:59 -- common/autotest_common.sh@862 -- # return 0 00:05:47.926 15:47:59 -- event/cpu_locks.sh@105 -- # locks_exist 57747 00:05:47.926 15:47:59 -- event/cpu_locks.sh@22 -- # lslocks -p 57747 00:05:47.926 15:47:59 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.502 15:47:59 -- event/cpu_locks.sh@107 -- # killprocess 57731 00:05:48.502 15:47:59 -- common/autotest_common.sh@936 -- # '[' -z 57731 ']' 00:05:48.502 15:47:59 -- common/autotest_common.sh@940 -- # kill -0 57731 00:05:48.502 15:47:59 -- common/autotest_common.sh@941 -- # uname 00:05:48.502 15:47:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:48.502 15:47:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57731 00:05:48.502 15:47:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:48.502 15:47:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:48.502 killing process with pid 57731 00:05:48.502 15:47:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57731' 00:05:48.502 15:47:59 -- common/autotest_common.sh@955 -- # kill 57731 00:05:48.502 15:47:59 -- common/autotest_common.sh@960 -- # wait 57731 00:05:51.031 15:48:02 -- event/cpu_locks.sh@108 -- # killprocess 57747 00:05:51.031 15:48:02 -- common/autotest_common.sh@936 -- # '[' -z 57747 ']' 00:05:51.031 15:48:02 -- common/autotest_common.sh@940 -- # kill -0 57747 00:05:51.031 15:48:02 -- common/autotest_common.sh@941 -- # uname 00:05:51.031 15:48:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:51.031 15:48:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57747 00:05:51.031 15:48:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:51.031 killing process with pid 57747 00:05:51.031 15:48:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:51.031 15:48:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57747' 00:05:51.031 15:48:02 -- common/autotest_common.sh@955 -- # kill 57747 00:05:51.031 15:48:02 -- common/autotest_common.sh@960 -- # wait 57747 00:05:51.967 00:05:51.967 real 0m6.399s 00:05:51.967 user 0m6.781s 00:05:51.967 sys 0m0.834s 00:05:51.967 15:48:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:51.967 ************************************ 00:05:51.967 END TEST locking_app_on_unlocked_coremask 00:05:51.967 15:48:03 -- common/autotest_common.sh@10 -- # set +x 00:05:51.967 ************************************ 00:05:51.967 15:48:03 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:51.967 15:48:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:51.967 15:48:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.967 15:48:03 -- common/autotest_common.sh@10 -- # set +x 
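Both locking tests above depend on starting a second target on an already-claimed core; a sketch of that pattern, using the binary path and flags from this run (backgrounding added for illustration):

    # first instance claims core 0 by flocking /var/tmp/spdk_cpu_lock_000
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    # second instance shares core 0 but skips the claim ("CPU core locks
    # deactivated.") and listens on its own RPC socket
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 \
        --disable-cpumask-locks -r /var/tmp/spdk2.sock &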
00:05:51.967 ************************************ 00:05:51.967 START TEST locking_app_on_locked_coremask 00:05:51.967 ************************************ 00:05:51.967 15:48:03 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:05:51.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.967 15:48:03 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=57844 00:05:51.967 15:48:03 -- event/cpu_locks.sh@116 -- # waitforlisten 57844 /var/tmp/spdk.sock 00:05:51.967 15:48:03 -- common/autotest_common.sh@829 -- # '[' -z 57844 ']' 00:05:51.967 15:48:03 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.967 15:48:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.967 15:48:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.967 15:48:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.967 15:48:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.967 15:48:03 -- common/autotest_common.sh@10 -- # set +x 00:05:51.967 [2024-11-29 15:48:03.377349] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:51.967 [2024-11-29 15:48:03.377464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57844 ] 00:05:52.226 [2024-11-29 15:48:03.524323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.485 [2024-11-29 15:48:03.665761] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:52.485 [2024-11-29 15:48:03.665922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.052 15:48:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.052 15:48:04 -- common/autotest_common.sh@862 -- # return 0 00:05:53.052 15:48:04 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=57856 00:05:53.052 15:48:04 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:53.052 15:48:04 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 57856 /var/tmp/spdk2.sock 00:05:53.052 15:48:04 -- common/autotest_common.sh@650 -- # local es=0 00:05:53.052 15:48:04 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57856 /var/tmp/spdk2.sock 00:05:53.052 15:48:04 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:53.052 15:48:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.052 15:48:04 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:53.052 15:48:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.052 15:48:04 -- common/autotest_common.sh@653 -- # waitforlisten 57856 /var/tmp/spdk2.sock 00:05:53.052 15:48:04 -- common/autotest_common.sh@829 -- # '[' -z 57856 ']' 00:05:53.052 15:48:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.052 15:48:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.053 15:48:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:53.053 15:48:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.053 15:48:04 -- common/autotest_common.sh@10 -- # set +x 00:05:53.053 [2024-11-29 15:48:04.244087] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:53.053 [2024-11-29 15:48:04.244209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57856 ] 00:05:53.053 [2024-11-29 15:48:04.394141] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 57844 has claimed it. 00:05:53.053 [2024-11-29 15:48:04.394186] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:53.636 ERROR: process (pid: 57856) is no longer running 00:05:53.636 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57856) - No such process 00:05:53.636 15:48:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.636 15:48:04 -- common/autotest_common.sh@862 -- # return 1 00:05:53.636 15:48:04 -- common/autotest_common.sh@653 -- # es=1 00:05:53.636 15:48:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:53.636 15:48:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:53.636 15:48:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:53.636 15:48:04 -- event/cpu_locks.sh@122 -- # locks_exist 57844 00:05:53.636 15:48:04 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.636 15:48:04 -- event/cpu_locks.sh@22 -- # lslocks -p 57844 00:05:53.636 15:48:05 -- event/cpu_locks.sh@124 -- # killprocess 57844 00:05:53.636 15:48:05 -- common/autotest_common.sh@936 -- # '[' -z 57844 ']' 00:05:53.636 15:48:05 -- common/autotest_common.sh@940 -- # kill -0 57844 00:05:53.636 15:48:05 -- common/autotest_common.sh@941 -- # uname 00:05:53.636 15:48:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:53.636 15:48:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57844 00:05:53.893 15:48:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:53.893 15:48:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:53.893 15:48:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57844' 00:05:53.893 killing process with pid 57844 00:05:53.893 15:48:05 -- common/autotest_common.sh@955 -- # kill 57844 00:05:53.893 15:48:05 -- common/autotest_common.sh@960 -- # wait 57844 00:05:54.827 00:05:54.827 real 0m2.950s 00:05:54.827 user 0m3.130s 00:05:54.827 sys 0m0.519s 00:05:54.827 15:48:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.827 ************************************ 00:05:54.827 END TEST locking_app_on_locked_coremask 00:05:54.827 15:48:06 -- common/autotest_common.sh@10 -- # set +x 00:05:54.827 ************************************ 00:05:55.085 15:48:06 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:55.085 15:48:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.085 15:48:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.085 15:48:06 -- common/autotest_common.sh@10 -- # set +x 00:05:55.085 ************************************ 00:05:55.085 START TEST locking_overlapped_coremask 00:05:55.085 ************************************ 00:05:55.085 15:48:06 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:05:55.085 15:48:06 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=57909 00:05:55.085 15:48:06 -- event/cpu_locks.sh@133 -- # waitforlisten 57909 /var/tmp/spdk.sock 00:05:55.085 15:48:06 -- common/autotest_common.sh@829 -- # '[' -z 57909 ']' 00:05:55.085 15:48:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.085 15:48:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.085 15:48:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.085 15:48:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.085 15:48:06 -- common/autotest_common.sh@10 -- # set +x 00:05:55.085 15:48:06 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:55.085 [2024-11-29 15:48:06.376930] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:55.086 [2024-11-29 15:48:06.377050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57909 ] 00:05:55.344 [2024-11-29 15:48:06.523913] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:55.344 [2024-11-29 15:48:06.665311] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.344 [2024-11-29 15:48:06.665556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.344 [2024-11-29 15:48:06.665842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.344 [2024-11-29 15:48:06.665876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.911 15:48:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.911 15:48:07 -- common/autotest_common.sh@862 -- # return 0 00:05:55.911 15:48:07 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=57927 00:05:55.911 15:48:07 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 57927 /var/tmp/spdk2.sock 00:05:55.911 15:48:07 -- common/autotest_common.sh@650 -- # local es=0 00:05:55.911 15:48:07 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57927 /var/tmp/spdk2.sock 00:05:55.911 15:48:07 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:55.911 15:48:07 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:55.911 15:48:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.911 15:48:07 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:55.911 15:48:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.911 15:48:07 -- common/autotest_common.sh@653 -- # waitforlisten 57927 /var/tmp/spdk2.sock 00:05:55.911 15:48:07 -- common/autotest_common.sh@829 -- # '[' -z 57927 ']' 00:05:55.911 15:48:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.911 15:48:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.911 15:48:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:55.911 15:48:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.911 15:48:07 -- common/autotest_common.sh@10 -- # set +x 00:05:55.911 [2024-11-29 15:48:07.259070] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:55.911 [2024-11-29 15:48:07.259186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57927 ] 00:05:56.169 [2024-11-29 15:48:07.411863] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57909 has claimed it. 00:05:56.169 [2024-11-29 15:48:07.416022] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:56.734 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57927) - No such process 00:05:56.734 ERROR: process (pid: 57927) is no longer running 00:05:56.734 15:48:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.734 15:48:07 -- common/autotest_common.sh@862 -- # return 1 00:05:56.734 15:48:07 -- common/autotest_common.sh@653 -- # es=1 00:05:56.734 15:48:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:56.734 15:48:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:56.734 15:48:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:56.734 15:48:07 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:56.734 15:48:07 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:56.734 15:48:07 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:56.734 15:48:07 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:56.734 15:48:07 -- event/cpu_locks.sh@141 -- # killprocess 57909 00:05:56.734 15:48:07 -- common/autotest_common.sh@936 -- # '[' -z 57909 ']' 00:05:56.734 15:48:07 -- common/autotest_common.sh@940 -- # kill -0 57909 00:05:56.734 15:48:07 -- common/autotest_common.sh@941 -- # uname 00:05:56.734 15:48:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:56.734 15:48:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57909 00:05:56.734 15:48:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:56.734 15:48:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:56.734 15:48:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57909' 00:05:56.734 killing process with pid 57909 00:05:56.734 15:48:07 -- common/autotest_common.sh@955 -- # kill 57909 00:05:56.734 15:48:07 -- common/autotest_common.sh@960 -- # wait 57909 00:05:58.104 00:05:58.104 real 0m2.801s 00:05:58.104 user 0m7.371s 00:05:58.104 sys 0m0.401s 00:05:58.104 15:48:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.105 15:48:09 -- common/autotest_common.sh@10 -- # set +x 00:05:58.105 ************************************ 00:05:58.105 END TEST locking_overlapped_coremask 00:05:58.105 ************************************ 00:05:58.105 15:48:09 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:58.105 15:48:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.105 15:48:09 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.105 15:48:09 -- common/autotest_common.sh@10 -- # set +x 00:05:58.105 ************************************ 00:05:58.105 START TEST locking_overlapped_coremask_via_rpc 00:05:58.105 ************************************ 00:05:58.105 15:48:09 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:05:58.105 15:48:09 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=57980 00:05:58.105 15:48:09 -- event/cpu_locks.sh@149 -- # waitforlisten 57980 /var/tmp/spdk.sock 00:05:58.105 15:48:09 -- common/autotest_common.sh@829 -- # '[' -z 57980 ']' 00:05:58.105 15:48:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.105 15:48:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.105 15:48:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.105 15:48:09 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:58.105 15:48:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.105 15:48:09 -- common/autotest_common.sh@10 -- # set +x 00:05:58.105 [2024-11-29 15:48:09.220381] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:58.105 [2024-11-29 15:48:09.220492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57980 ] 00:05:58.105 [2024-11-29 15:48:09.363609] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:58.105 [2024-11-29 15:48:09.363648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.105 [2024-11-29 15:48:09.517583] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:58.105 [2024-11-29 15:48:09.517854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.105 [2024-11-29 15:48:09.518141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.105 [2024-11-29 15:48:09.518260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.671 15:48:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.671 15:48:09 -- common/autotest_common.sh@862 -- # return 0 00:05:58.671 15:48:09 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=57998 00:05:58.671 15:48:09 -- event/cpu_locks.sh@153 -- # waitforlisten 57998 /var/tmp/spdk2.sock 00:05:58.671 15:48:09 -- common/autotest_common.sh@829 -- # '[' -z 57998 ']' 00:05:58.671 15:48:09 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:58.671 15:48:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.671 15:48:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.671 15:48:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:58.671 15:48:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.671 15:48:09 -- common/autotest_common.sh@10 -- # set +x 00:05:58.671 [2024-11-29 15:48:10.054256] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:58.671 [2024-11-29 15:48:10.054362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57998 ] 00:05:58.929 [2024-11-29 15:48:10.210109] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:58.929 [2024-11-29 15:48:10.214003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.187 [2024-11-29 15:48:10.575710] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:59.187 [2024-11-29 15:48:10.576049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:59.187 [2024-11-29 15:48:10.576359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.187 [2024-11-29 15:48:10.576389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:01.083 15:48:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.083 15:48:12 -- common/autotest_common.sh@862 -- # return 0 00:06:01.083 15:48:12 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:01.083 15:48:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.083 15:48:12 -- common/autotest_common.sh@10 -- # set +x 00:06:01.083 15:48:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.083 15:48:12 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.083 15:48:12 -- common/autotest_common.sh@650 -- # local es=0 00:06:01.083 15:48:12 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.083 15:48:12 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:01.083 15:48:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.083 15:48:12 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:01.083 15:48:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.083 15:48:12 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.083 15:48:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.083 15:48:12 -- common/autotest_common.sh@10 -- # set +x 00:06:01.083 [2024-11-29 15:48:12.246145] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57980 has claimed it. 
00:06:01.083 request: 00:06:01.083 { 00:06:01.083 "method": "framework_enable_cpumask_locks", 00:06:01.083 "req_id": 1 00:06:01.083 } 00:06:01.083 Got JSON-RPC error response 00:06:01.083 response: 00:06:01.083 { 00:06:01.083 "code": -32603, 00:06:01.083 "message": "Failed to claim CPU core: 2" 00:06:01.083 } 00:06:01.083 15:48:12 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:01.083 15:48:12 -- common/autotest_common.sh@653 -- # es=1 00:06:01.083 15:48:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:01.083 15:48:12 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:01.083 15:48:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:01.083 15:48:12 -- event/cpu_locks.sh@158 -- # waitforlisten 57980 /var/tmp/spdk.sock 00:06:01.083 15:48:12 -- common/autotest_common.sh@829 -- # '[' -z 57980 ']' 00:06:01.083 15:48:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.083 15:48:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.083 15:48:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.083 15:48:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.083 15:48:12 -- common/autotest_common.sh@10 -- # set +x 00:06:01.083 15:48:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.083 15:48:12 -- common/autotest_common.sh@862 -- # return 0 00:06:01.083 15:48:12 -- event/cpu_locks.sh@159 -- # waitforlisten 57998 /var/tmp/spdk2.sock 00:06:01.083 15:48:12 -- common/autotest_common.sh@829 -- # '[' -z 57998 ']' 00:06:01.083 15:48:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.083 15:48:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.083 15:48:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
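The error above is the mask overlap in miniature: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so both targets want core 2, and pid 57980 already holds /var/tmp/spdk_cpu_lock_002. A sketch of reproducing the logged request/response with a plain rpc.py call (socket path as in this run):

    # ask the second, lock-less target to claim its cores after the fact
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> JSON-RPC error -32603, "Failed to claim CPU core: 2",
    #    the same response body printed in the log above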
00:06:01.083 15:48:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.083 15:48:12 -- common/autotest_common.sh@10 -- # set +x 00:06:01.341 15:48:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.341 15:48:12 -- common/autotest_common.sh@862 -- # return 0 00:06:01.341 15:48:12 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:01.341 15:48:12 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:01.341 15:48:12 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:01.341 15:48:12 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:01.341 00:06:01.341 real 0m3.497s 00:06:01.341 user 0m1.244s 00:06:01.341 sys 0m0.183s 00:06:01.341 15:48:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:01.341 15:48:12 -- common/autotest_common.sh@10 -- # set +x 00:06:01.341 ************************************ 00:06:01.341 END TEST locking_overlapped_coremask_via_rpc 00:06:01.341 ************************************ 00:06:01.341 15:48:12 -- event/cpu_locks.sh@174 -- # cleanup 00:06:01.341 15:48:12 -- event/cpu_locks.sh@15 -- # [[ -z 57980 ]] 00:06:01.341 15:48:12 -- event/cpu_locks.sh@15 -- # killprocess 57980 00:06:01.341 15:48:12 -- common/autotest_common.sh@936 -- # '[' -z 57980 ']' 00:06:01.341 15:48:12 -- common/autotest_common.sh@940 -- # kill -0 57980 00:06:01.341 15:48:12 -- common/autotest_common.sh@941 -- # uname 00:06:01.341 15:48:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:01.341 15:48:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57980 00:06:01.341 15:48:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:01.341 killing process with pid 57980 00:06:01.341 15:48:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:01.341 15:48:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57980' 00:06:01.341 15:48:12 -- common/autotest_common.sh@955 -- # kill 57980 00:06:01.341 15:48:12 -- common/autotest_common.sh@960 -- # wait 57980 00:06:02.714 15:48:13 -- event/cpu_locks.sh@16 -- # [[ -z 57998 ]] 00:06:02.714 15:48:13 -- event/cpu_locks.sh@16 -- # killprocess 57998 00:06:02.714 15:48:13 -- common/autotest_common.sh@936 -- # '[' -z 57998 ']' 00:06:02.714 15:48:13 -- common/autotest_common.sh@940 -- # kill -0 57998 00:06:02.714 15:48:13 -- common/autotest_common.sh@941 -- # uname 00:06:02.714 15:48:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:02.714 15:48:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57998 00:06:02.714 15:48:13 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:02.714 killing process with pid 57998 00:06:02.714 15:48:13 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:02.714 15:48:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57998' 00:06:02.714 15:48:13 -- common/autotest_common.sh@955 -- # kill 57998 00:06:02.714 15:48:13 -- common/autotest_common.sh@960 -- # wait 57998 00:06:04.091 15:48:15 -- event/cpu_locks.sh@18 -- # rm -f 00:06:04.091 15:48:15 -- event/cpu_locks.sh@1 -- # cleanup 00:06:04.091 15:48:15 -- event/cpu_locks.sh@15 -- # [[ -z 57980 ]] 00:06:04.091 15:48:15 -- event/cpu_locks.sh@15 -- # killprocess 57980 00:06:04.091 15:48:15 -- 
common/autotest_common.sh@936 -- # '[' -z 57980 ']' 00:06:04.091 Process with pid 57980 is not found 00:06:04.091 15:48:15 -- common/autotest_common.sh@940 -- # kill -0 57980 00:06:04.091 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (57980) - No such process 00:06:04.092 15:48:15 -- common/autotest_common.sh@963 -- # echo 'Process with pid 57980 is not found' 00:06:04.092 15:48:15 -- event/cpu_locks.sh@16 -- # [[ -z 57998 ]] 00:06:04.092 15:48:15 -- event/cpu_locks.sh@16 -- # killprocess 57998 00:06:04.092 15:48:15 -- common/autotest_common.sh@936 -- # '[' -z 57998 ']' 00:06:04.092 15:48:15 -- common/autotest_common.sh@940 -- # kill -0 57998 00:06:04.092 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (57998) - No such process 00:06:04.092 Process with pid 57998 is not found 00:06:04.092 15:48:15 -- common/autotest_common.sh@963 -- # echo 'Process with pid 57998 is not found' 00:06:04.092 15:48:15 -- event/cpu_locks.sh@18 -- # rm -f 00:06:04.092 00:06:04.092 real 0m29.622s 00:06:04.092 user 0m53.049s 00:06:04.092 sys 0m4.365s 00:06:04.092 15:48:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.092 15:48:15 -- common/autotest_common.sh@10 -- # set +x 00:06:04.092 ************************************ 00:06:04.092 END TEST cpu_locks 00:06:04.092 ************************************ 00:06:04.092 ************************************ 00:06:04.092 END TEST event 00:06:04.092 ************************************ 00:06:04.092 00:06:04.092 real 0m55.513s 00:06:04.092 user 1m42.477s 00:06:04.092 sys 0m7.148s 00:06:04.092 15:48:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.092 15:48:15 -- common/autotest_common.sh@10 -- # set +x 00:06:04.092 15:48:15 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:04.092 15:48:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.092 15:48:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.092 15:48:15 -- common/autotest_common.sh@10 -- # set +x 00:06:04.092 ************************************ 00:06:04.092 START TEST thread 00:06:04.092 ************************************ 00:06:04.092 15:48:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:04.092 * Looking for test storage... 
00:06:04.092 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:04.092 15:48:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:04.092 15:48:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:04.092 15:48:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:04.350 15:48:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:04.350 15:48:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:04.350 15:48:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:04.350 15:48:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:04.350 15:48:15 -- scripts/common.sh@335 -- # IFS=.-: 00:06:04.350 15:48:15 -- scripts/common.sh@335 -- # read -ra ver1 00:06:04.350 15:48:15 -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.350 15:48:15 -- scripts/common.sh@336 -- # read -ra ver2 00:06:04.350 15:48:15 -- scripts/common.sh@337 -- # local 'op=<' 00:06:04.350 15:48:15 -- scripts/common.sh@339 -- # ver1_l=2 00:06:04.350 15:48:15 -- scripts/common.sh@340 -- # ver2_l=1 00:06:04.350 15:48:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:04.350 15:48:15 -- scripts/common.sh@343 -- # case "$op" in 00:06:04.350 15:48:15 -- scripts/common.sh@344 -- # : 1 00:06:04.350 15:48:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:04.350 15:48:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.350 15:48:15 -- scripts/common.sh@364 -- # decimal 1 00:06:04.350 15:48:15 -- scripts/common.sh@352 -- # local d=1 00:06:04.350 15:48:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.350 15:48:15 -- scripts/common.sh@354 -- # echo 1 00:06:04.350 15:48:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:04.350 15:48:15 -- scripts/common.sh@365 -- # decimal 2 00:06:04.350 15:48:15 -- scripts/common.sh@352 -- # local d=2 00:06:04.350 15:48:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.350 15:48:15 -- scripts/common.sh@354 -- # echo 2 00:06:04.350 15:48:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:04.350 15:48:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:04.350 15:48:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:04.350 15:48:15 -- scripts/common.sh@367 -- # return 0 00:06:04.350 15:48:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.350 15:48:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:04.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.350 --rc genhtml_branch_coverage=1 00:06:04.350 --rc genhtml_function_coverage=1 00:06:04.350 --rc genhtml_legend=1 00:06:04.350 --rc geninfo_all_blocks=1 00:06:04.350 --rc geninfo_unexecuted_blocks=1 00:06:04.350 00:06:04.350 ' 00:06:04.350 15:48:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:04.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.350 --rc genhtml_branch_coverage=1 00:06:04.350 --rc genhtml_function_coverage=1 00:06:04.350 --rc genhtml_legend=1 00:06:04.350 --rc geninfo_all_blocks=1 00:06:04.350 --rc geninfo_unexecuted_blocks=1 00:06:04.350 00:06:04.350 ' 00:06:04.350 15:48:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:04.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.350 --rc genhtml_branch_coverage=1 00:06:04.350 --rc genhtml_function_coverage=1 00:06:04.350 --rc genhtml_legend=1 00:06:04.350 --rc geninfo_all_blocks=1 00:06:04.350 --rc geninfo_unexecuted_blocks=1 00:06:04.350 00:06:04.350 ' 00:06:04.350 15:48:15 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:04.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.350 --rc genhtml_branch_coverage=1 00:06:04.350 --rc genhtml_function_coverage=1 00:06:04.350 --rc genhtml_legend=1 00:06:04.350 --rc geninfo_all_blocks=1 00:06:04.350 --rc geninfo_unexecuted_blocks=1 00:06:04.350 00:06:04.350 ' 00:06:04.351 15:48:15 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:04.351 15:48:15 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:04.351 15:48:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.351 15:48:15 -- common/autotest_common.sh@10 -- # set +x 00:06:04.351 ************************************ 00:06:04.351 START TEST thread_poller_perf 00:06:04.351 ************************************ 00:06:04.351 15:48:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:04.351 [2024-11-29 15:48:15.613787] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:04.351 [2024-11-29 15:48:15.613902] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58161 ] 00:06:04.351 [2024-11-29 15:48:15.760562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.609 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:04.609 [2024-11-29 15:48:15.903352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.114 [2024-11-29T15:48:17.545Z] ====================================== 00:06:06.114 [2024-11-29T15:48:17.545Z] busy:2612476856 (cyc) 00:06:06.114 [2024-11-29T15:48:17.545Z] total_run_count: 378000 00:06:06.114 [2024-11-29T15:48:17.545Z] tsc_hz: 2600000000 (cyc) 00:06:06.114 [2024-11-29T15:48:17.545Z] ====================================== 00:06:06.114 [2024-11-29T15:48:17.545Z] poller_cost: 6911 (cyc), 2658 (nsec) 00:06:06.114 ************************************ 00:06:06.114 END TEST thread_poller_perf 00:06:06.114 ************************************ 00:06:06.114 00:06:06.114 real 0m1.539s 00:06:06.114 user 0m1.360s 00:06:06.114 sys 0m0.072s 00:06:06.114 15:48:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:06.114 15:48:17 -- common/autotest_common.sh@10 -- # set +x 00:06:06.114 15:48:17 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:06.114 15:48:17 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:06.114 15:48:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.114 15:48:17 -- common/autotest_common.sh@10 -- # set +x 00:06:06.114 ************************************ 00:06:06.114 START TEST thread_poller_perf 00:06:06.114 ************************************ 00:06:06.114 15:48:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:06.114 [2024-11-29 15:48:17.195529] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
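The poller_perf summary above reduces to one division: poller_cost is busy cycles per poller invocation, converted to nanoseconds via the TSC rate. A worked check against the first run's counters (my reading of the printed fields, not the tool's source):

    busy=2612476856 runs=378000 tsc_hz=2600000000
    awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" 'BEGIN {
        cyc = int(b / r)                    # 2612476856 / 378000 -> 6911 (cyc)
        ns  = int(cyc * 1e9 / hz)           # 6911 cyc / 2.6 GHz  -> 2658 (nsec)
        printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, ns
    }'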
00:06:06.114 [2024-11-29 15:48:17.195631] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58203 ] 00:06:06.114 [2024-11-29 15:48:17.343254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.114 [2024-11-29 15:48:17.485505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.114 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:07.488 [2024-11-29T15:48:18.919Z] ====================================== 00:06:07.488 [2024-11-29T15:48:18.919Z] busy:2603644258 (cyc) 00:06:07.488 [2024-11-29T15:48:18.919Z] total_run_count: 5219000 00:06:07.488 [2024-11-29T15:48:18.919Z] tsc_hz: 2600000000 (cyc) 00:06:07.488 [2024-11-29T15:48:18.919Z] ====================================== 00:06:07.488 [2024-11-29T15:48:18.919Z] poller_cost: 498 (cyc), 191 (nsec) 00:06:07.488 ************************************ 00:06:07.488 END TEST thread_poller_perf 00:06:07.488 ************************************ 00:06:07.488 00:06:07.488 real 0m1.531s 00:06:07.488 user 0m1.355s 00:06:07.488 sys 0m0.069s 00:06:07.488 15:48:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.488 15:48:18 -- common/autotest_common.sh@10 -- # set +x 00:06:07.488 15:48:18 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:07.488 ************************************ 00:06:07.488 END TEST thread 00:06:07.488 ************************************ 00:06:07.488 00:06:07.488 real 0m3.291s 00:06:07.488 user 0m2.814s 00:06:07.488 sys 0m0.264s 00:06:07.488 15:48:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.488 15:48:18 -- common/autotest_common.sh@10 -- # set +x 00:06:07.488 15:48:18 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:07.488 15:48:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.488 15:48:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.488 15:48:18 -- common/autotest_common.sh@10 -- # set +x 00:06:07.488 ************************************ 00:06:07.488 START TEST accel 00:06:07.488 ************************************ 00:06:07.488 15:48:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:07.488 * Looking for test storage... 
00:06:07.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:07.488 15:48:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:07.488 15:48:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:07.488 15:48:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:07.489 15:48:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:07.489 15:48:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:07.489 15:48:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:07.489 15:48:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:07.489 15:48:18 -- scripts/common.sh@335 -- # IFS=.-: 00:06:07.489 15:48:18 -- scripts/common.sh@335 -- # read -ra ver1 00:06:07.489 15:48:18 -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.489 15:48:18 -- scripts/common.sh@336 -- # read -ra ver2 00:06:07.489 15:48:18 -- scripts/common.sh@337 -- # local 'op=<' 00:06:07.489 15:48:18 -- scripts/common.sh@339 -- # ver1_l=2 00:06:07.489 15:48:18 -- scripts/common.sh@340 -- # ver2_l=1 00:06:07.489 15:48:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:07.489 15:48:18 -- scripts/common.sh@343 -- # case "$op" in 00:06:07.489 15:48:18 -- scripts/common.sh@344 -- # : 1 00:06:07.489 15:48:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:07.489 15:48:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:07.489 15:48:18 -- scripts/common.sh@364 -- # decimal 1 00:06:07.489 15:48:18 -- scripts/common.sh@352 -- # local d=1 00:06:07.489 15:48:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.489 15:48:18 -- scripts/common.sh@354 -- # echo 1 00:06:07.489 15:48:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:07.489 15:48:18 -- scripts/common.sh@365 -- # decimal 2 00:06:07.489 15:48:18 -- scripts/common.sh@352 -- # local d=2 00:06:07.489 15:48:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.489 15:48:18 -- scripts/common.sh@354 -- # echo 2 00:06:07.489 15:48:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:07.489 15:48:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:07.489 15:48:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:07.489 15:48:18 -- scripts/common.sh@367 -- # return 0 00:06:07.489 15:48:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.489 15:48:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:07.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.489 --rc genhtml_branch_coverage=1 00:06:07.489 --rc genhtml_function_coverage=1 00:06:07.489 --rc genhtml_legend=1 00:06:07.489 --rc geninfo_all_blocks=1 00:06:07.489 --rc geninfo_unexecuted_blocks=1 00:06:07.489 00:06:07.489 ' 00:06:07.489 15:48:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:07.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.489 --rc genhtml_branch_coverage=1 00:06:07.489 --rc genhtml_function_coverage=1 00:06:07.489 --rc genhtml_legend=1 00:06:07.489 --rc geninfo_all_blocks=1 00:06:07.489 --rc geninfo_unexecuted_blocks=1 00:06:07.489 00:06:07.489 ' 00:06:07.489 15:48:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:07.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.489 --rc genhtml_branch_coverage=1 00:06:07.489 --rc genhtml_function_coverage=1 00:06:07.489 --rc genhtml_legend=1 00:06:07.489 --rc geninfo_all_blocks=1 00:06:07.489 --rc geninfo_unexecuted_blocks=1 00:06:07.489 00:06:07.489 ' 00:06:07.489 15:48:18 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:07.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.489 --rc genhtml_branch_coverage=1 00:06:07.489 --rc genhtml_function_coverage=1 00:06:07.489 --rc genhtml_legend=1 00:06:07.489 --rc geninfo_all_blocks=1 00:06:07.489 --rc geninfo_unexecuted_blocks=1 00:06:07.489 00:06:07.489 ' 00:06:07.489 15:48:18 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:07.489 15:48:18 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:07.489 15:48:18 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:07.489 15:48:18 -- accel/accel.sh@59 -- # spdk_tgt_pid=58280 00:06:07.489 15:48:18 -- accel/accel.sh@60 -- # waitforlisten 58280 00:06:07.489 15:48:18 -- common/autotest_common.sh@829 -- # '[' -z 58280 ']' 00:06:07.489 15:48:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.489 15:48:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.489 15:48:18 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:07.489 15:48:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.489 15:48:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.489 15:48:18 -- accel/accel.sh@58 -- # build_accel_config 00:06:07.489 15:48:18 -- common/autotest_common.sh@10 -- # set +x 00:06:07.489 15:48:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.489 15:48:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.489 15:48:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.489 15:48:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.489 15:48:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.489 15:48:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.489 15:48:18 -- accel/accel.sh@42 -- # jq -r . 00:06:07.748 [2024-11-29 15:48:18.973070] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:07.748 [2024-11-29 15:48:18.973186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58280 ] 00:06:07.748 [2024-11-29 15:48:19.120126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.005 [2024-11-29 15:48:19.264121] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:08.005 [2024-11-29 15:48:19.264292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.572 15:48:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.572 15:48:19 -- common/autotest_common.sh@862 -- # return 0 00:06:08.572 15:48:19 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:08.572 15:48:19 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:08.572 15:48:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.572 15:48:19 -- common/autotest_common.sh@10 -- # set +x 00:06:08.572 15:48:19 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:08.572 15:48:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.572 15:48:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.572 15:48:19 -- accel/accel.sh@64 -- # IFS== 00:06:08.572 15:48:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.572 15:48:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.572 15:48:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.572 15:48:19 -- accel/accel.sh@64 -- # IFS== 00:06:08.572 15:48:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.572 15:48:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.572 15:48:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.572 15:48:19 -- accel/accel.sh@64 -- # IFS== 00:06:08.572 15:48:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.572 15:48:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.572 15:48:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.572 15:48:19 -- accel/accel.sh@64 -- # IFS== 00:06:08.572 15:48:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.572 15:48:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.572 15:48:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.572 15:48:19 -- accel/accel.sh@64 -- # IFS== 00:06:08.572 15:48:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.572 15:48:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.572 15:48:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.572 15:48:19 -- accel/accel.sh@64 -- # IFS== 00:06:08.572 15:48:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.572 15:48:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.572 15:48:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.572 15:48:19 -- accel/accel.sh@64 -- # IFS== 00:06:08.572 15:48:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.572 15:48:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.572 15:48:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.572 15:48:19 -- accel/accel.sh@64 -- # IFS== 00:06:08.572 15:48:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.572 15:48:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.572 15:48:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.572 15:48:19 -- accel/accel.sh@64 -- # IFS== 00:06:08.572 15:48:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.572 15:48:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.572 15:48:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.572 15:48:19 -- accel/accel.sh@64 -- # IFS== 00:06:08.573 15:48:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.573 15:48:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.573 15:48:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.573 15:48:19 -- accel/accel.sh@64 -- # IFS== 00:06:08.573 15:48:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.573 15:48:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.573 15:48:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.573 15:48:19 -- accel/accel.sh@64 -- # IFS== 00:06:08.573 15:48:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.573 15:48:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.573 15:48:19 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:08.573 15:48:19 -- accel/accel.sh@64 -- # IFS== 00:06:08.573 15:48:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.573 15:48:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.573 15:48:19 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.573 15:48:19 -- accel/accel.sh@64 -- # IFS== 00:06:08.573 15:48:19 -- accel/accel.sh@64 -- # read -r opc module 00:06:08.573 15:48:19 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:08.573 15:48:19 -- accel/accel.sh@67 -- # killprocess 58280 00:06:08.573 15:48:19 -- common/autotest_common.sh@936 -- # '[' -z 58280 ']' 00:06:08.573 15:48:19 -- common/autotest_common.sh@940 -- # kill -0 58280 00:06:08.573 15:48:19 -- common/autotest_common.sh@941 -- # uname 00:06:08.573 15:48:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:08.573 15:48:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58280 00:06:08.573 15:48:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:08.573 15:48:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:08.573 killing process with pid 58280 00:06:08.573 15:48:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58280' 00:06:08.573 15:48:19 -- common/autotest_common.sh@955 -- # kill 58280 00:06:08.573 15:48:19 -- common/autotest_common.sh@960 -- # wait 58280 00:06:09.950 15:48:21 -- accel/accel.sh@68 -- # trap - ERR 00:06:09.950 15:48:21 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:09.950 15:48:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:09.950 15:48:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.950 15:48:21 -- common/autotest_common.sh@10 -- # set +x 00:06:09.950 15:48:21 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:09.950 15:48:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:09.950 15:48:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.950 15:48:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.950 15:48:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.950 15:48:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.950 15:48:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.950 15:48:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.950 15:48:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.950 15:48:21 -- accel/accel.sh@42 -- # jq -r . 
00:06:09.950 15:48:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:09.950 15:48:21 -- common/autotest_common.sh@10 -- # set +x 00:06:09.950 15:48:21 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:09.950 15:48:21 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:09.950 15:48:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.950 15:48:21 -- common/autotest_common.sh@10 -- # set +x 00:06:09.950 ************************************ 00:06:09.950 START TEST accel_missing_filename 00:06:09.950 ************************************ 00:06:09.950 15:48:21 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:09.950 15:48:21 -- common/autotest_common.sh@650 -- # local es=0 00:06:09.950 15:48:21 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:09.950 15:48:21 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:09.950 15:48:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.950 15:48:21 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:09.950 15:48:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.950 15:48:21 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:09.950 15:48:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:09.950 15:48:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.950 15:48:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.950 15:48:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.950 15:48:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.950 15:48:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.950 15:48:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.950 15:48:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.950 15:48:21 -- accel/accel.sh@42 -- # jq -r . 00:06:09.950 [2024-11-29 15:48:21.181453] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:09.950 [2024-11-29 15:48:21.181627] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58350 ] 00:06:09.950 [2024-11-29 15:48:21.318470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.208 [2024-11-29 15:48:21.462616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.208 [2024-11-29 15:48:21.575722] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:10.467 [2024-11-29 15:48:21.841354] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:10.726 A filename is required. 
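"A filename is required." is the expected outcome here: run_test wrapped accel_perf in NOT, which succeeds only when the command under it fails. The es bookkeeping in the next stretch of trace is NOT's exit-status normalization, roughly (condensed; the real helper's case statement also maps a few core-dump signals back to success):

    NOT() {
        local es=0
        "$@" || es=$?
        if ((es > 128)); then       # statuses above 128 encode death by signal
            es=$((es & ~128))       # strip that bit: 234 -> 106, as traced below
            case "$es" in
                *) es=1 ;;          # any signal still counts as plain failure
            esac
        fi
        ((!es == 0))                # invert: exit 0 exactly when the command failed
    }

So the aborting accel_perf's status 234 is laundered to es=1, the final arithmetic inverts it, and accel_missing_filename passes.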
00:06:10.726 15:48:22 -- common/autotest_common.sh@653 -- # es=234 00:06:10.726 15:48:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.726 ************************************ 00:06:10.726 END TEST accel_missing_filename 00:06:10.726 ************************************ 00:06:10.726 15:48:22 -- common/autotest_common.sh@662 -- # es=106 00:06:10.726 15:48:22 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:10.726 15:48:22 -- common/autotest_common.sh@670 -- # es=1 00:06:10.726 15:48:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.726 00:06:10.726 real 0m0.898s 00:06:10.726 user 0m0.727s 00:06:10.726 sys 0m0.095s 00:06:10.726 15:48:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:10.726 15:48:22 -- common/autotest_common.sh@10 -- # set +x 00:06:10.726 15:48:22 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:10.726 15:48:22 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:10.726 15:48:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.726 15:48:22 -- common/autotest_common.sh@10 -- # set +x 00:06:10.726 ************************************ 00:06:10.726 START TEST accel_compress_verify 00:06:10.726 ************************************ 00:06:10.726 15:48:22 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:10.726 15:48:22 -- common/autotest_common.sh@650 -- # local es=0 00:06:10.726 15:48:22 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:10.726 15:48:22 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:10.726 15:48:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.726 15:48:22 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:10.726 15:48:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.726 15:48:22 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:10.726 15:48:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:10.726 15:48:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.726 15:48:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.726 15:48:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.726 15:48:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.726 15:48:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.726 15:48:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.726 15:48:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.726 15:48:22 -- accel/accel.sh@42 -- # jq -r . 00:06:10.726 [2024-11-29 15:48:22.109059] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:10.726 [2024-11-29 15:48:22.109180] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58381 ] 00:06:10.984 [2024-11-29 15:48:22.256249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.984 [2024-11-29 15:48:22.394773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.242 [2024-11-29 15:48:22.507551] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:11.500 [2024-11-29 15:48:22.772212] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:11.760 00:06:11.760 Compression does not support the verify option, aborting. 00:06:11.760 15:48:22 -- common/autotest_common.sh@653 -- # es=161 00:06:11.760 15:48:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:11.760 15:48:22 -- common/autotest_common.sh@662 -- # es=33 00:06:11.760 15:48:22 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:11.760 15:48:22 -- common/autotest_common.sh@670 -- # es=1 00:06:11.760 15:48:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:11.760 ************************************ 00:06:11.760 END TEST accel_compress_verify 00:06:11.760 ************************************ 00:06:11.760 00:06:11.760 real 0m0.908s 00:06:11.760 user 0m0.723s 00:06:11.760 sys 0m0.108s 00:06:11.760 15:48:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.760 15:48:22 -- common/autotest_common.sh@10 -- # set +x 00:06:11.760 15:48:23 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:11.760 15:48:23 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:11.760 15:48:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.760 15:48:23 -- common/autotest_common.sh@10 -- # set +x 00:06:11.760 ************************************ 00:06:11.760 START TEST accel_wrong_workload 00:06:11.760 ************************************ 00:06:11.760 15:48:23 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:11.760 15:48:23 -- common/autotest_common.sh@650 -- # local es=0 00:06:11.760 15:48:23 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:11.760 15:48:23 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:11.760 15:48:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.760 15:48:23 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:11.760 15:48:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.760 15:48:23 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:11.760 15:48:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:11.760 15:48:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.760 15:48:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:11.760 15:48:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.760 15:48:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.760 15:48:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:11.760 15:48:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:11.760 15:48:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:11.760 15:48:23 -- accel/accel.sh@42 -- # jq -r . 
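Two negative tests down at roughly 0.9s apiece, with accel_wrong_workload primed above and its output next. Each of these runs inside run_test, the harness behind every START/END banner and real/user/sys triple in this log. A minimal sketch of that frame; the real version also saves and restores xtrace state and records per-test timings:

    run_test() {
        (($# > 1)) || return 1    # the '[' 7 -le 1 ']' guard seen in the trace
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                 # emits the real/user/sys lines
        local es=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $es
    }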
00:06:11.760 Unsupported workload type: foobar 00:06:11.760 [2024-11-29 15:48:23.060338] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:11.760 accel_perf options: 00:06:11.760 [-h help message] 00:06:11.760 [-q queue depth per core] 00:06:11.760 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:11.760 [-T number of threads per core 00:06:11.760 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:11.760 [-t time in seconds] 00:06:11.760 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:11.760 [ dif_verify, , dif_generate, dif_generate_copy 00:06:11.760 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:11.760 [-l for compress/decompress workloads, name of uncompressed input file 00:06:11.760 [-S for crc32c workload, use this seed value (default 0) 00:06:11.760 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:11.760 [-f for fill workload, use this BYTE value (default 255) 00:06:11.760 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:11.760 [-y verify result if this switch is on] 00:06:11.760 [-a tasks to allocate per core (default: same value as -q)] 00:06:11.760 Can be used to spread operations across a wider range of memory. 00:06:11.760 15:48:23 -- common/autotest_common.sh@653 -- # es=1 00:06:11.760 15:48:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:11.760 15:48:23 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:11.760 15:48:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:11.760 00:06:11.760 real 0m0.053s 00:06:11.760 user 0m0.056s 00:06:11.760 sys 0m0.025s 00:06:11.760 15:48:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.760 ************************************ 00:06:11.760 END TEST accel_wrong_workload 00:06:11.760 ************************************ 00:06:11.760 15:48:23 -- common/autotest_common.sh@10 -- # set +x 00:06:11.760 15:48:23 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:11.760 15:48:23 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:11.760 15:48:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.760 15:48:23 -- common/autotest_common.sh@10 -- # set +x 00:06:11.760 ************************************ 00:06:11.760 START TEST accel_negative_buffers 00:06:11.760 ************************************ 00:06:11.760 15:48:23 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:11.760 15:48:23 -- common/autotest_common.sh@650 -- # local es=0 00:06:11.760 15:48:23 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:11.760 15:48:23 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:11.760 15:48:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.760 15:48:23 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:11.760 15:48:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.760 15:48:23 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:11.760 15:48:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:11.760 15:48:23 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:11.760 15:48:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:11.760 15:48:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.760 15:48:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.760 15:48:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:11.760 15:48:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:11.760 15:48:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:11.760 15:48:23 -- accel/accel.sh@42 -- # jq -r . 00:06:11.760 -x option must be non-negative. 00:06:11.760 [2024-11-29 15:48:23.141613] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:11.760 accel_perf options: 00:06:11.760 [-h help message] 00:06:11.760 [-q queue depth per core] 00:06:11.760 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:11.760 [-T number of threads per core 00:06:11.760 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:11.760 [-t time in seconds] 00:06:11.760 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:11.760 [ dif_verify, , dif_generate, dif_generate_copy 00:06:11.760 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:11.760 [-l for compress/decompress workloads, name of uncompressed input file 00:06:11.760 [-S for crc32c workload, use this seed value (default 0) 00:06:11.760 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:11.760 [-f for fill workload, use this BYTE value (default 255) 00:06:11.760 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:11.760 [-y verify result if this switch is on] 00:06:11.760 [-a tasks to allocate per core (default: same value as -q)] 00:06:11.761 Can be used to spread operations across a wider range of memory. 
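That is the same usage dump twice, once per rejected option, and it spells out accel_perf's contract; each negative test in this block violates exactly one clause of it. Side by side, with every invocation and quoted error taken from this trace:

    accel_perf -t 1 -w compress                                                     # "A filename is required." (compress without -l)
    accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y   # "Compression does not support the verify option"
    accel_perf -t 1 -w foobar                                                       # "Unsupported workload type: foobar"
    accel_perf -t 1 -w xor -y -x -1                                                 # "-x option must be non-negative."
    accel_perf -t 1 -w crc32c -S 32 -y                                              # valid: the one-second crc32c run that follows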
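One more recurring frame before the positive tests: every accel_perf call above actually goes through a shell wrapper plus build_accel_config (accel.sh@12 and @32-42), which is why -c always points at /dev/fd/62, a process substitution carrying a generated JSON config. A sketch of the mechanism, hedged: the trace only shows three [[ 0 -gt 0 ]] probes, which correspond to hardware-offload knobs (e.g. DSA/IAA), all off in this run, and SPDK_EXAMPLE_DIR is assumed to resolve to the build/examples path seen in the trace:

    accel_perf() {
        # the <() file descriptor appears inside the child as /dev/fd/62
        "$SPDK_EXAMPLE_DIR/accel_perf" -c <(build_accel_config) "$@"
    }

    build_accel_config() {
        accel_json_cfg=()
        # hardware accel modules would append RPC stanzas here; with all
        # offload flags at 0 the array stays empty, as traced above
        local IFS=,
        jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
    }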
00:06:11.761 15:48:23 -- common/autotest_common.sh@653 -- # es=1 00:06:11.761 15:48:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:11.761 15:48:23 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:11.761 15:48:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:11.761 00:06:11.761 real 0m0.049s 00:06:11.761 user 0m0.051s 00:06:11.761 sys 0m0.025s 00:06:11.761 15:48:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.761 15:48:23 -- common/autotest_common.sh@10 -- # set +x 00:06:11.761 ************************************ 00:06:11.761 END TEST accel_negative_buffers 00:06:11.761 ************************************ 00:06:12.019 15:48:23 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:12.019 15:48:23 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:12.019 15:48:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.019 15:48:23 -- common/autotest_common.sh@10 -- # set +x 00:06:12.019 ************************************ 00:06:12.019 START TEST accel_crc32c 00:06:12.019 ************************************ 00:06:12.019 15:48:23 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:12.019 15:48:23 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.019 15:48:23 -- accel/accel.sh@17 -- # local accel_module 00:06:12.019 15:48:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:12.019 15:48:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:12.019 15:48:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.019 15:48:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.019 15:48:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.019 15:48:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.019 15:48:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.019 15:48:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.019 15:48:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.019 15:48:23 -- accel/accel.sh@42 -- # jq -r . 00:06:12.019 [2024-11-29 15:48:23.228739] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:12.019 [2024-11-29 15:48:23.228850] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58448 ] 00:06:12.019 [2024-11-29 15:48:23.382174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.278 [2024-11-29 15:48:23.553650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.178 15:48:25 -- accel/accel.sh@18 -- # out=' 00:06:14.178 SPDK Configuration: 00:06:14.178 Core mask: 0x1 00:06:14.178 00:06:14.178 Accel Perf Configuration: 00:06:14.178 Workload Type: crc32c 00:06:14.178 CRC-32C seed: 32 00:06:14.178 Transfer size: 4096 bytes 00:06:14.178 Vector count 1 00:06:14.178 Module: software 00:06:14.178 Queue depth: 32 00:06:14.178 Allocate depth: 32 00:06:14.179 # threads/core: 1 00:06:14.179 Run time: 1 seconds 00:06:14.179 Verify: Yes 00:06:14.179 00:06:14.179 Running for 1 seconds... 
00:06:14.179 00:06:14.179 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:14.179 ------------------------------------------------------------------------------------ 00:06:14.179 0,0 459680/s 1795 MiB/s 0 0 00:06:14.179 ==================================================================================== 00:06:14.179 Total 459680/s 1795 MiB/s 0 0' 00:06:14.179 15:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:14.179 15:48:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:14.179 15:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:14.179 15:48:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:14.179 15:48:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.179 15:48:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.179 15:48:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.179 15:48:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.179 15:48:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.179 15:48:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.179 15:48:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.179 15:48:25 -- accel/accel.sh@42 -- # jq -r . 00:06:14.179 [2024-11-29 15:48:25.219864] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:14.179 [2024-11-29 15:48:25.220096] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58473 ] 00:06:14.179 [2024-11-29 15:48:25.364800] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.179 [2024-11-29 15:48:25.507197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.437 15:48:25 -- accel/accel.sh@21 -- # val= 00:06:14.437 15:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:14.437 15:48:25 -- accel/accel.sh@21 -- # val= 00:06:14.437 15:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:14.437 15:48:25 -- accel/accel.sh@21 -- # val=0x1 00:06:14.437 15:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:14.437 15:48:25 -- accel/accel.sh@21 -- # val= 00:06:14.437 15:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:14.437 15:48:25 -- accel/accel.sh@21 -- # val= 00:06:14.437 15:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:14.437 15:48:25 -- accel/accel.sh@21 -- # val=crc32c 00:06:14.437 15:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.437 15:48:25 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:14.437 15:48:25 -- accel/accel.sh@21 -- # val=32 00:06:14.437 15:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:14.437 15:48:25 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:14.437 15:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:14.437 15:48:25 -- accel/accel.sh@21 -- # val= 00:06:14.437 15:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:14.437 15:48:25 -- accel/accel.sh@21 -- # val=software 00:06:14.437 15:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.437 15:48:25 -- accel/accel.sh@23 -- # accel_module=software 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:14.437 15:48:25 -- accel/accel.sh@21 -- # val=32 00:06:14.437 15:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:14.437 15:48:25 -- accel/accel.sh@21 -- # val=32 00:06:14.437 15:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:14.437 15:48:25 -- accel/accel.sh@21 -- # val=1 00:06:14.437 15:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:14.437 15:48:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:14.437 15:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:14.437 15:48:25 -- accel/accel.sh@21 -- # val=Yes 00:06:14.437 15:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:14.437 15:48:25 -- accel/accel.sh@21 -- # val= 00:06:14.437 15:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:14.437 15:48:25 -- accel/accel.sh@21 -- # val= 00:06:14.437 15:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:14.437 15:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:15.810 15:48:27 -- accel/accel.sh@21 -- # val= 00:06:15.810 15:48:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.810 15:48:27 -- accel/accel.sh@20 -- # IFS=: 00:06:15.810 15:48:27 -- accel/accel.sh@20 -- # read -r var val 00:06:15.810 15:48:27 -- accel/accel.sh@21 -- # val= 00:06:15.810 15:48:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.810 15:48:27 -- accel/accel.sh@20 -- # IFS=: 00:06:15.810 15:48:27 -- accel/accel.sh@20 -- # read -r var val 00:06:15.810 15:48:27 -- accel/accel.sh@21 -- # val= 00:06:15.810 15:48:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.810 15:48:27 -- accel/accel.sh@20 -- # IFS=: 00:06:15.810 15:48:27 -- accel/accel.sh@20 -- # read -r var val 00:06:15.810 15:48:27 -- accel/accel.sh@21 -- # val= 00:06:15.810 15:48:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.810 15:48:27 -- accel/accel.sh@20 -- # IFS=: 00:06:15.810 15:48:27 -- accel/accel.sh@20 -- # read -r var val 00:06:15.810 15:48:27 -- accel/accel.sh@21 -- # val= 00:06:15.810 15:48:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.810 15:48:27 -- accel/accel.sh@20 -- # IFS=: 00:06:15.810 15:48:27 -- 
accel/accel.sh@20 -- # read -r var val 00:06:15.810 15:48:27 -- accel/accel.sh@21 -- # val= 00:06:15.810 15:48:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.810 15:48:27 -- accel/accel.sh@20 -- # IFS=: 00:06:15.810 15:48:27 -- accel/accel.sh@20 -- # read -r var val 00:06:15.810 ************************************ 00:06:15.810 END TEST accel_crc32c 00:06:15.810 ************************************ 00:06:15.810 15:48:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:15.810 15:48:27 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:15.810 15:48:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.810 00:06:15.810 real 0m3.909s 00:06:15.810 user 0m3.472s 00:06:15.810 sys 0m0.235s 00:06:15.810 15:48:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:15.810 15:48:27 -- common/autotest_common.sh@10 -- # set +x 00:06:15.810 15:48:27 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:15.810 15:48:27 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:15.810 15:48:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.810 15:48:27 -- common/autotest_common.sh@10 -- # set +x 00:06:15.810 ************************************ 00:06:15.810 START TEST accel_crc32c_C2 00:06:15.810 ************************************ 00:06:15.810 15:48:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:15.810 15:48:27 -- accel/accel.sh@16 -- # local accel_opc 00:06:15.810 15:48:27 -- accel/accel.sh@17 -- # local accel_module 00:06:15.810 15:48:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:15.810 15:48:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:15.810 15:48:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.810 15:48:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.810 15:48:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.810 15:48:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.810 15:48:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.810 15:48:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.810 15:48:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.810 15:48:27 -- accel/accel.sh@42 -- # jq -r . 00:06:15.810 [2024-11-29 15:48:27.177400] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:15.810 [2024-11-29 15:48:27.177505] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58510 ] 00:06:16.068 [2024-11-29 15:48:27.323825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.068 [2024-11-29 15:48:27.494082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.979 15:48:29 -- accel/accel.sh@18 -- # out=' 00:06:17.979 SPDK Configuration: 00:06:17.979 Core mask: 0x1 00:06:17.979 00:06:17.979 Accel Perf Configuration: 00:06:17.979 Workload Type: crc32c 00:06:17.979 CRC-32C seed: 0 00:06:17.979 Transfer size: 4096 bytes 00:06:17.979 Vector count 2 00:06:17.979 Module: software 00:06:17.979 Queue depth: 32 00:06:17.979 Allocate depth: 32 00:06:17.979 # threads/core: 1 00:06:17.979 Run time: 1 seconds 00:06:17.979 Verify: Yes 00:06:17.979 00:06:17.979 Running for 1 seconds... 
00:06:17.979 00:06:17.979 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:17.979 ------------------------------------------------------------------------------------ 00:06:17.979 0,0 389088/s 3039 MiB/s 0 0 00:06:17.979 ==================================================================================== 00:06:17.979 Total 389088/s 1519 MiB/s 0 0' 00:06:17.979 15:48:29 -- accel/accel.sh@20 -- # IFS=: 00:06:17.979 15:48:29 -- accel/accel.sh@20 -- # read -r var val 00:06:17.979 15:48:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:17.979 15:48:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:17.979 15:48:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.979 15:48:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.979 15:48:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.979 15:48:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.979 15:48:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.979 15:48:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.979 15:48:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.979 15:48:29 -- accel/accel.sh@42 -- # jq -r . 00:06:17.979 [2024-11-29 15:48:29.204737] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:17.979 [2024-11-29 15:48:29.204833] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58536 ] 00:06:17.979 [2024-11-29 15:48:29.347710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.238 [2024-11-29 15:48:29.491946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.238 15:48:29 -- accel/accel.sh@21 -- # val= 00:06:18.238 15:48:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.238 15:48:29 -- accel/accel.sh@20 -- # IFS=: 00:06:18.238 15:48:29 -- accel/accel.sh@20 -- # read -r var val 00:06:18.238 15:48:29 -- accel/accel.sh@21 -- # val= 00:06:18.238 15:48:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.238 15:48:29 -- accel/accel.sh@20 -- # IFS=: 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # read -r var val 00:06:18.239 15:48:29 -- accel/accel.sh@21 -- # val=0x1 00:06:18.239 15:48:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # IFS=: 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # read -r var val 00:06:18.239 15:48:29 -- accel/accel.sh@21 -- # val= 00:06:18.239 15:48:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # IFS=: 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # read -r var val 00:06:18.239 15:48:29 -- accel/accel.sh@21 -- # val= 00:06:18.239 15:48:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # IFS=: 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # read -r var val 00:06:18.239 15:48:29 -- accel/accel.sh@21 -- # val=crc32c 00:06:18.239 15:48:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.239 15:48:29 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # IFS=: 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # read -r var val 00:06:18.239 15:48:29 -- accel/accel.sh@21 -- # val=0 00:06:18.239 15:48:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # IFS=: 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # read -r var val 00:06:18.239 15:48:29 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:18.239 15:48:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # IFS=: 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # read -r var val 00:06:18.239 15:48:29 -- accel/accel.sh@21 -- # val= 00:06:18.239 15:48:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # IFS=: 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # read -r var val 00:06:18.239 15:48:29 -- accel/accel.sh@21 -- # val=software 00:06:18.239 15:48:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.239 15:48:29 -- accel/accel.sh@23 -- # accel_module=software 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # IFS=: 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # read -r var val 00:06:18.239 15:48:29 -- accel/accel.sh@21 -- # val=32 00:06:18.239 15:48:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # IFS=: 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # read -r var val 00:06:18.239 15:48:29 -- accel/accel.sh@21 -- # val=32 00:06:18.239 15:48:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # IFS=: 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # read -r var val 00:06:18.239 15:48:29 -- accel/accel.sh@21 -- # val=1 00:06:18.239 15:48:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # IFS=: 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # read -r var val 00:06:18.239 15:48:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:18.239 15:48:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # IFS=: 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # read -r var val 00:06:18.239 15:48:29 -- accel/accel.sh@21 -- # val=Yes 00:06:18.239 15:48:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # IFS=: 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # read -r var val 00:06:18.239 15:48:29 -- accel/accel.sh@21 -- # val= 00:06:18.239 15:48:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # IFS=: 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # read -r var val 00:06:18.239 15:48:29 -- accel/accel.sh@21 -- # val= 00:06:18.239 15:48:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # IFS=: 00:06:18.239 15:48:29 -- accel/accel.sh@20 -- # read -r var val 00:06:20.140 15:48:31 -- accel/accel.sh@21 -- # val= 00:06:20.140 15:48:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.140 15:48:31 -- accel/accel.sh@20 -- # IFS=: 00:06:20.140 15:48:31 -- accel/accel.sh@20 -- # read -r var val 00:06:20.140 15:48:31 -- accel/accel.sh@21 -- # val= 00:06:20.140 15:48:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.140 15:48:31 -- accel/accel.sh@20 -- # IFS=: 00:06:20.140 15:48:31 -- accel/accel.sh@20 -- # read -r var val 00:06:20.140 15:48:31 -- accel/accel.sh@21 -- # val= 00:06:20.140 15:48:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.140 15:48:31 -- accel/accel.sh@20 -- # IFS=: 00:06:20.140 15:48:31 -- accel/accel.sh@20 -- # read -r var val 00:06:20.140 15:48:31 -- accel/accel.sh@21 -- # val= 00:06:20.140 15:48:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.140 15:48:31 -- accel/accel.sh@20 -- # IFS=: 00:06:20.140 15:48:31 -- accel/accel.sh@20 -- # read -r var val 00:06:20.140 15:48:31 -- accel/accel.sh@21 -- # val= 00:06:20.140 15:48:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.140 15:48:31 -- accel/accel.sh@20 -- # IFS=: 00:06:20.140 15:48:31 -- 
accel/accel.sh@20 -- # read -r var val 00:06:20.140 15:48:31 -- accel/accel.sh@21 -- # val= 00:06:20.140 15:48:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.140 15:48:31 -- accel/accel.sh@20 -- # IFS=: 00:06:20.140 15:48:31 -- accel/accel.sh@20 -- # read -r var val 00:06:20.141 15:48:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:20.141 15:48:31 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:20.141 15:48:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.141 00:06:20.141 real 0m3.931s 00:06:20.141 user 0m3.498s 00:06:20.141 sys 0m0.230s 00:06:20.141 15:48:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.141 15:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:20.141 ************************************ 00:06:20.141 END TEST accel_crc32c_C2 00:06:20.141 ************************************ 00:06:20.141 15:48:31 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:20.141 15:48:31 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:20.141 15:48:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.141 15:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:20.141 ************************************ 00:06:20.141 START TEST accel_copy 00:06:20.141 ************************************ 00:06:20.141 15:48:31 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:20.141 15:48:31 -- accel/accel.sh@16 -- # local accel_opc 00:06:20.141 15:48:31 -- accel/accel.sh@17 -- # local accel_module 00:06:20.141 15:48:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:20.141 15:48:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:20.141 15:48:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.141 15:48:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.141 15:48:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.141 15:48:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.141 15:48:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.141 15:48:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.141 15:48:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.141 15:48:31 -- accel/accel.sh@42 -- # jq -r . 00:06:20.141 [2024-11-29 15:48:31.146627] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:20.141 [2024-11-29 15:48:31.146735] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58571 ] 00:06:20.141 [2024-11-29 15:48:31.295089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.141 [2024-11-29 15:48:31.477848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.044 15:48:33 -- accel/accel.sh@18 -- # out=' 00:06:22.044 SPDK Configuration: 00:06:22.044 Core mask: 0x1 00:06:22.044 00:06:22.044 Accel Perf Configuration: 00:06:22.044 Workload Type: copy 00:06:22.044 Transfer size: 4096 bytes 00:06:22.044 Vector count 1 00:06:22.044 Module: software 00:06:22.044 Queue depth: 32 00:06:22.044 Allocate depth: 32 00:06:22.044 # threads/core: 1 00:06:22.044 Run time: 1 seconds 00:06:22.044 Verify: Yes 00:06:22.044 00:06:22.044 Running for 1 seconds... 
00:06:22.044 00:06:22.044 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:22.044 ------------------------------------------------------------------------------------ 00:06:22.044 0,0 285408/s 1114 MiB/s 0 0 00:06:22.044 ==================================================================================== 00:06:22.044 Total 285408/s 1114 MiB/s 0 0' 00:06:22.044 15:48:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.044 15:48:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.044 15:48:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:22.044 15:48:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:22.044 15:48:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.044 15:48:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.044 15:48:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.044 15:48:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.044 15:48:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.044 15:48:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.044 15:48:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.044 15:48:33 -- accel/accel.sh@42 -- # jq -r . 00:06:22.044 [2024-11-29 15:48:33.261808] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:22.045 [2024-11-29 15:48:33.261911] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58602 ] 00:06:22.045 [2024-11-29 15:48:33.410631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.303 [2024-11-29 15:48:33.591818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.563 15:48:33 -- accel/accel.sh@21 -- # val= 00:06:22.563 15:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.563 15:48:33 -- accel/accel.sh@21 -- # val= 00:06:22.563 15:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.563 15:48:33 -- accel/accel.sh@21 -- # val=0x1 00:06:22.563 15:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.563 15:48:33 -- accel/accel.sh@21 -- # val= 00:06:22.563 15:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.563 15:48:33 -- accel/accel.sh@21 -- # val= 00:06:22.563 15:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.563 15:48:33 -- accel/accel.sh@21 -- # val=copy 00:06:22.563 15:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.563 15:48:33 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.563 15:48:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:22.563 15:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.563 15:48:33 -- 
accel/accel.sh@21 -- # val= 00:06:22.563 15:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.563 15:48:33 -- accel/accel.sh@21 -- # val=software 00:06:22.563 15:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.563 15:48:33 -- accel/accel.sh@23 -- # accel_module=software 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.563 15:48:33 -- accel/accel.sh@21 -- # val=32 00:06:22.563 15:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.563 15:48:33 -- accel/accel.sh@21 -- # val=32 00:06:22.563 15:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.563 15:48:33 -- accel/accel.sh@21 -- # val=1 00:06:22.563 15:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.563 15:48:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:22.563 15:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.563 15:48:33 -- accel/accel.sh@21 -- # val=Yes 00:06:22.563 15:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.563 15:48:33 -- accel/accel.sh@21 -- # val= 00:06:22.563 15:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.563 15:48:33 -- accel/accel.sh@21 -- # val= 00:06:22.563 15:48:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.563 15:48:33 -- accel/accel.sh@20 -- # read -r var val 00:06:23.937 15:48:35 -- accel/accel.sh@21 -- # val= 00:06:23.937 15:48:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.937 15:48:35 -- accel/accel.sh@20 -- # IFS=: 00:06:23.937 15:48:35 -- accel/accel.sh@20 -- # read -r var val 00:06:23.937 15:48:35 -- accel/accel.sh@21 -- # val= 00:06:23.937 15:48:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.937 15:48:35 -- accel/accel.sh@20 -- # IFS=: 00:06:23.937 15:48:35 -- accel/accel.sh@20 -- # read -r var val 00:06:23.937 15:48:35 -- accel/accel.sh@21 -- # val= 00:06:23.937 15:48:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.937 15:48:35 -- accel/accel.sh@20 -- # IFS=: 00:06:23.937 15:48:35 -- accel/accel.sh@20 -- # read -r var val 00:06:23.937 15:48:35 -- accel/accel.sh@21 -- # val= 00:06:23.937 15:48:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.937 15:48:35 -- accel/accel.sh@20 -- # IFS=: 00:06:23.937 15:48:35 -- accel/accel.sh@20 -- # read -r var val 00:06:23.937 15:48:35 -- accel/accel.sh@21 -- # val= 00:06:23.937 15:48:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.937 15:48:35 -- accel/accel.sh@20 -- # IFS=: 00:06:23.937 15:48:35 -- accel/accel.sh@20 -- # read -r var val 00:06:23.937 15:48:35 -- accel/accel.sh@21 -- # val= 00:06:23.937 15:48:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.937 15:48:35 -- accel/accel.sh@20 -- # IFS=: 00:06:23.937 15:48:35 -- 
accel/accel.sh@20 -- # read -r var val 00:06:23.937 ************************************ 00:06:23.937 END TEST accel_copy 00:06:23.937 ************************************ 00:06:23.937 15:48:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:23.937 15:48:35 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:23.937 15:48:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.937 00:06:23.937 real 0m4.237s 00:06:23.937 user 0m3.782s 00:06:23.937 sys 0m0.248s 00:06:23.937 15:48:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.937 15:48:35 -- common/autotest_common.sh@10 -- # set +x 00:06:24.196 15:48:35 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.196 15:48:35 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:24.196 15:48:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.196 15:48:35 -- common/autotest_common.sh@10 -- # set +x 00:06:24.196 ************************************ 00:06:24.196 START TEST accel_fill 00:06:24.196 ************************************ 00:06:24.196 15:48:35 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.196 15:48:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:24.196 15:48:35 -- accel/accel.sh@17 -- # local accel_module 00:06:24.196 15:48:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.196 15:48:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.196 15:48:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.196 15:48:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.196 15:48:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.196 15:48:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.196 15:48:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.196 15:48:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.196 15:48:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.196 15:48:35 -- accel/accel.sh@42 -- # jq -r . 00:06:24.196 [2024-11-29 15:48:35.423953] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:24.196 [2024-11-29 15:48:35.424081] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58644 ] 00:06:24.196 [2024-11-29 15:48:35.572853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.454 [2024-11-29 15:48:35.753852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.394 15:48:37 -- accel/accel.sh@18 -- # out=' 00:06:26.394 SPDK Configuration: 00:06:26.394 Core mask: 0x1 00:06:26.394 00:06:26.394 Accel Perf Configuration: 00:06:26.394 Workload Type: fill 00:06:26.394 Fill pattern: 0x80 00:06:26.394 Transfer size: 4096 bytes 00:06:26.394 Vector count 1 00:06:26.394 Module: software 00:06:26.394 Queue depth: 64 00:06:26.394 Allocate depth: 64 00:06:26.394 # threads/core: 1 00:06:26.394 Run time: 1 seconds 00:06:26.394 Verify: Yes 00:06:26.394 00:06:26.394 Running for 1 seconds... 
00:06:26.394 00:06:26.394 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:26.394 ------------------------------------------------------------------------------------ 00:06:26.394 0,0 456768/s 1784 MiB/s 0 0 00:06:26.394 ==================================================================================== 00:06:26.394 Total 456768/s 1784 MiB/s 0 0' 00:06:26.394 15:48:37 -- accel/accel.sh@20 -- # IFS=: 00:06:26.394 15:48:37 -- accel/accel.sh@20 -- # read -r var val 00:06:26.394 15:48:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:26.394 15:48:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:26.394 15:48:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.394 15:48:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.394 15:48:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.394 15:48:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.394 15:48:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.394 15:48:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.394 15:48:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.394 15:48:37 -- accel/accel.sh@42 -- # jq -r . 00:06:26.394 [2024-11-29 15:48:37.422605] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:26.394 [2024-11-29 15:48:37.422711] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58670 ] 00:06:26.394 [2024-11-29 15:48:37.571176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.394 [2024-11-29 15:48:37.721800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.653 15:48:37 -- accel/accel.sh@21 -- # val= 00:06:26.653 15:48:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # IFS=: 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # read -r var val 00:06:26.653 15:48:37 -- accel/accel.sh@21 -- # val= 00:06:26.653 15:48:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # IFS=: 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # read -r var val 00:06:26.653 15:48:37 -- accel/accel.sh@21 -- # val=0x1 00:06:26.653 15:48:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # IFS=: 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # read -r var val 00:06:26.653 15:48:37 -- accel/accel.sh@21 -- # val= 00:06:26.653 15:48:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # IFS=: 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # read -r var val 00:06:26.653 15:48:37 -- accel/accel.sh@21 -- # val= 00:06:26.653 15:48:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # IFS=: 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # read -r var val 00:06:26.653 15:48:37 -- accel/accel.sh@21 -- # val=fill 00:06:26.653 15:48:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.653 15:48:37 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # IFS=: 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # read -r var val 00:06:26.653 15:48:37 -- accel/accel.sh@21 -- # val=0x80 00:06:26.653 15:48:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # IFS=: 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # read -r var val 
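Note on reading these result tables: the Bandwidth column is simply the Transfers column multiplied by the configured transfer size, with MiB taken as 2^20 bytes. A quick sanity check for the fill run above (the awk one-liner is illustrative, not part of the harness):

  # 456768 transfers/s x 4096 bytes each => ~1784 MiB/s, matching the table
  awk 'BEGIN { printf "%d MiB/s\n", 456768 * 4096 / (1024 * 1024) }'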
00:06:26.653 15:48:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:26.653 15:48:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # IFS=: 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # read -r var val 00:06:26.653 15:48:37 -- accel/accel.sh@21 -- # val= 00:06:26.653 15:48:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # IFS=: 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # read -r var val 00:06:26.653 15:48:37 -- accel/accel.sh@21 -- # val=software 00:06:26.653 15:48:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.653 15:48:37 -- accel/accel.sh@23 -- # accel_module=software 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # IFS=: 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # read -r var val 00:06:26.653 15:48:37 -- accel/accel.sh@21 -- # val=64 00:06:26.653 15:48:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # IFS=: 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # read -r var val 00:06:26.653 15:48:37 -- accel/accel.sh@21 -- # val=64 00:06:26.653 15:48:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # IFS=: 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # read -r var val 00:06:26.653 15:48:37 -- accel/accel.sh@21 -- # val=1 00:06:26.653 15:48:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # IFS=: 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # read -r var val 00:06:26.653 15:48:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:26.653 15:48:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # IFS=: 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # read -r var val 00:06:26.653 15:48:37 -- accel/accel.sh@21 -- # val=Yes 00:06:26.653 15:48:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # IFS=: 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # read -r var val 00:06:26.653 15:48:37 -- accel/accel.sh@21 -- # val= 00:06:26.653 15:48:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # IFS=: 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # read -r var val 00:06:26.653 15:48:37 -- accel/accel.sh@21 -- # val= 00:06:26.653 15:48:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # IFS=: 00:06:26.653 15:48:37 -- accel/accel.sh@20 -- # read -r var val 00:06:28.030 15:48:39 -- accel/accel.sh@21 -- # val= 00:06:28.030 15:48:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.030 15:48:39 -- accel/accel.sh@20 -- # IFS=: 00:06:28.030 15:48:39 -- accel/accel.sh@20 -- # read -r var val 00:06:28.030 15:48:39 -- accel/accel.sh@21 -- # val= 00:06:28.030 15:48:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.031 15:48:39 -- accel/accel.sh@20 -- # IFS=: 00:06:28.031 15:48:39 -- accel/accel.sh@20 -- # read -r var val 00:06:28.031 15:48:39 -- accel/accel.sh@21 -- # val= 00:06:28.031 15:48:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.031 15:48:39 -- accel/accel.sh@20 -- # IFS=: 00:06:28.031 15:48:39 -- accel/accel.sh@20 -- # read -r var val 00:06:28.031 15:48:39 -- accel/accel.sh@21 -- # val= 00:06:28.031 15:48:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.031 15:48:39 -- accel/accel.sh@20 -- # IFS=: 00:06:28.031 15:48:39 -- accel/accel.sh@20 -- # read -r var val 00:06:28.031 15:48:39 -- accel/accel.sh@21 -- # val= 00:06:28.031 15:48:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.031 15:48:39 -- accel/accel.sh@20 -- # IFS=: 
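For reference, the -f 128 argument passed to accel_test above and the 'Fill pattern: 0x80' line reported by accel_perf are the same byte in decimal and hexadecimal:

  printf '0x%x\n' 128   # prints 0x80, the fill byte shown in the configuration dump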
00:06:28.031 15:48:39 -- accel/accel.sh@20 -- # read -r var val 00:06:28.031 15:48:39 -- accel/accel.sh@21 -- # val= 00:06:28.031 15:48:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.031 15:48:39 -- accel/accel.sh@20 -- # IFS=: 00:06:28.031 15:48:39 -- accel/accel.sh@20 -- # read -r var val 00:06:28.031 15:48:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:28.031 15:48:39 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:28.031 15:48:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.031 00:06:28.031 real 0m3.916s 00:06:28.031 user 0m3.478s 00:06:28.031 sys 0m0.226s 00:06:28.031 15:48:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.031 ************************************ 00:06:28.031 END TEST accel_fill 00:06:28.031 ************************************ 00:06:28.031 15:48:39 -- common/autotest_common.sh@10 -- # set +x 00:06:28.031 15:48:39 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:28.031 15:48:39 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:28.031 15:48:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.031 15:48:39 -- common/autotest_common.sh@10 -- # set +x 00:06:28.031 ************************************ 00:06:28.031 START TEST accel_copy_crc32c 00:06:28.031 ************************************ 00:06:28.031 15:48:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:28.031 15:48:39 -- accel/accel.sh@16 -- # local accel_opc 00:06:28.031 15:48:39 -- accel/accel.sh@17 -- # local accel_module 00:06:28.031 15:48:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:28.031 15:48:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:28.031 15:48:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.031 15:48:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.031 15:48:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.031 15:48:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.031 15:48:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.031 15:48:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.031 15:48:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.031 15:48:39 -- accel/accel.sh@42 -- # jq -r . 00:06:28.031 [2024-11-29 15:48:39.380620] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:28.031 [2024-11-29 15:48:39.380724] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58710 ] 00:06:28.289 [2024-11-29 15:48:39.528054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.289 [2024-11-29 15:48:39.680203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.189 15:48:41 -- accel/accel.sh@18 -- # out=' 00:06:30.189 SPDK Configuration: 00:06:30.189 Core mask: 0x1 00:06:30.189 00:06:30.189 Accel Perf Configuration: 00:06:30.189 Workload Type: copy_crc32c 00:06:30.189 CRC-32C seed: 0 00:06:30.189 Vector size: 4096 bytes 00:06:30.189 Transfer size: 4096 bytes 00:06:30.189 Vector count 1 00:06:30.189 Module: software 00:06:30.189 Queue depth: 32 00:06:30.189 Allocate depth: 32 00:06:30.189 # threads/core: 1 00:06:30.189 Run time: 1 seconds 00:06:30.189 Verify: Yes 00:06:30.189 00:06:30.189 Running for 1 seconds... 
00:06:30.189 00:06:30.189 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:30.189 ------------------------------------------------------------------------------------ 00:06:30.189 0,0 300512/s 1173 MiB/s 0 0 00:06:30.189 ==================================================================================== 00:06:30.189 Total 300512/s 1173 MiB/s 0 0' 00:06:30.189 15:48:41 -- accel/accel.sh@20 -- # IFS=: 00:06:30.189 15:48:41 -- accel/accel.sh@20 -- # read -r var val 00:06:30.189 15:48:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:30.189 15:48:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:30.189 15:48:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.189 15:48:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.189 15:48:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.189 15:48:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.189 15:48:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.189 15:48:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.189 15:48:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.189 15:48:41 -- accel/accel.sh@42 -- # jq -r . 00:06:30.189 [2024-11-29 15:48:41.298296] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:30.189 [2024-11-29 15:48:41.298400] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58731 ] 00:06:30.189 [2024-11-29 15:48:41.439159] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.189 [2024-11-29 15:48:41.580597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.447 15:48:41 -- accel/accel.sh@21 -- # val= 00:06:30.447 15:48:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # IFS=: 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # read -r var val 00:06:30.447 15:48:41 -- accel/accel.sh@21 -- # val= 00:06:30.447 15:48:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # IFS=: 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # read -r var val 00:06:30.447 15:48:41 -- accel/accel.sh@21 -- # val=0x1 00:06:30.447 15:48:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # IFS=: 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # read -r var val 00:06:30.447 15:48:41 -- accel/accel.sh@21 -- # val= 00:06:30.447 15:48:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # IFS=: 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # read -r var val 00:06:30.447 15:48:41 -- accel/accel.sh@21 -- # val= 00:06:30.447 15:48:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # IFS=: 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # read -r var val 00:06:30.447 15:48:41 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:30.447 15:48:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.447 15:48:41 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # IFS=: 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # read -r var val 00:06:30.447 15:48:41 -- accel/accel.sh@21 -- # val=0 00:06:30.447 15:48:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # IFS=: 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # read -r var val 00:06:30.447 
15:48:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:30.447 15:48:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # IFS=: 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # read -r var val 00:06:30.447 15:48:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:30.447 15:48:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # IFS=: 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # read -r var val 00:06:30.447 15:48:41 -- accel/accel.sh@21 -- # val= 00:06:30.447 15:48:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # IFS=: 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # read -r var val 00:06:30.447 15:48:41 -- accel/accel.sh@21 -- # val=software 00:06:30.447 15:48:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.447 15:48:41 -- accel/accel.sh@23 -- # accel_module=software 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # IFS=: 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # read -r var val 00:06:30.447 15:48:41 -- accel/accel.sh@21 -- # val=32 00:06:30.447 15:48:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # IFS=: 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # read -r var val 00:06:30.447 15:48:41 -- accel/accel.sh@21 -- # val=32 00:06:30.447 15:48:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # IFS=: 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # read -r var val 00:06:30.447 15:48:41 -- accel/accel.sh@21 -- # val=1 00:06:30.447 15:48:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # IFS=: 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # read -r var val 00:06:30.447 15:48:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:30.447 15:48:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # IFS=: 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # read -r var val 00:06:30.447 15:48:41 -- accel/accel.sh@21 -- # val=Yes 00:06:30.447 15:48:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # IFS=: 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # read -r var val 00:06:30.447 15:48:41 -- accel/accel.sh@21 -- # val= 00:06:30.447 15:48:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # IFS=: 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # read -r var val 00:06:30.447 15:48:41 -- accel/accel.sh@21 -- # val= 00:06:30.447 15:48:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # IFS=: 00:06:30.447 15:48:41 -- accel/accel.sh@20 -- # read -r var val 00:06:31.821 15:48:43 -- accel/accel.sh@21 -- # val= 00:06:31.821 15:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.821 15:48:43 -- accel/accel.sh@20 -- # IFS=: 00:06:31.821 15:48:43 -- accel/accel.sh@20 -- # read -r var val 00:06:31.821 15:48:43 -- accel/accel.sh@21 -- # val= 00:06:31.821 15:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.821 15:48:43 -- accel/accel.sh@20 -- # IFS=: 00:06:31.821 15:48:43 -- accel/accel.sh@20 -- # read -r var val 00:06:31.821 15:48:43 -- accel/accel.sh@21 -- # val= 00:06:31.821 15:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.821 15:48:43 -- accel/accel.sh@20 -- # IFS=: 00:06:31.821 15:48:43 -- accel/accel.sh@20 -- # read -r var val 00:06:31.821 15:48:43 -- accel/accel.sh@21 -- # val= 00:06:31.821 15:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.821 15:48:43 -- accel/accel.sh@20 -- # IFS=: 
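The harness always feeds accel_perf a generated JSON config over /dev/fd/62 (the build_accel_config / jq -r . trace above). To reproduce a run by hand, the same binary can be invoked directly; a minimal sketch, assuming the in-repo build path seen in this log and that the software module falls back to defaults when no -c config is supplied:

  # Re-run the copy_crc32c workload standalone: -t run time in seconds,
  # -w workload, -y verify; queue/allocate depth default to the 32 reported above.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y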
00:06:31.821 15:48:43 -- accel/accel.sh@20 -- # read -r var val 00:06:31.821 15:48:43 -- accel/accel.sh@21 -- # val= 00:06:31.821 15:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.821 15:48:43 -- accel/accel.sh@20 -- # IFS=: 00:06:31.821 15:48:43 -- accel/accel.sh@20 -- # read -r var val 00:06:31.821 15:48:43 -- accel/accel.sh@21 -- # val= 00:06:31.821 15:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.821 15:48:43 -- accel/accel.sh@20 -- # IFS=: 00:06:31.821 15:48:43 -- accel/accel.sh@20 -- # read -r var val 00:06:31.821 15:48:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:31.821 15:48:43 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:31.821 15:48:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.821 00:06:31.821 real 0m3.813s 00:06:31.821 user 0m3.387s 00:06:31.821 sys 0m0.226s 00:06:31.821 ************************************ 00:06:31.821 END TEST accel_copy_crc32c 00:06:31.821 ************************************ 00:06:31.821 15:48:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:31.821 15:48:43 -- common/autotest_common.sh@10 -- # set +x 00:06:31.821 15:48:43 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:31.821 15:48:43 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:31.821 15:48:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.821 15:48:43 -- common/autotest_common.sh@10 -- # set +x 00:06:31.821 ************************************ 00:06:31.821 START TEST accel_copy_crc32c_C2 00:06:31.821 ************************************ 00:06:31.821 15:48:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:31.821 15:48:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.821 15:48:43 -- accel/accel.sh@17 -- # local accel_module 00:06:31.821 15:48:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:31.821 15:48:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:31.821 15:48:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.821 15:48:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.821 15:48:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.821 15:48:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.821 15:48:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.821 15:48:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.821 15:48:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.821 15:48:43 -- accel/accel.sh@42 -- # jq -r . 00:06:31.821 [2024-11-29 15:48:43.237655] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:31.821 [2024-11-29 15:48:43.237757] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58772 ] 00:06:32.080 [2024-11-29 15:48:43.384074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.338 [2024-11-29 15:48:43.526740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.713 15:48:45 -- accel/accel.sh@18 -- # out=' 00:06:33.713 SPDK Configuration: 00:06:33.713 Core mask: 0x1 00:06:33.713 00:06:33.713 Accel Perf Configuration: 00:06:33.713 Workload Type: copy_crc32c 00:06:33.713 CRC-32C seed: 0 00:06:33.713 Vector size: 4096 bytes 00:06:33.713 Transfer size: 8192 bytes 00:06:33.713 Vector count 2 00:06:33.713 Module: software 00:06:33.713 Queue depth: 32 00:06:33.713 Allocate depth: 32 00:06:33.713 # threads/core: 1 00:06:33.713 Run time: 1 seconds 00:06:33.713 Verify: Yes 00:06:33.713 00:06:33.713 Running for 1 seconds... 00:06:33.713 00:06:33.713 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:33.713 ------------------------------------------------------------------------------------ 00:06:33.713 0,0 232032/s 1812 MiB/s 0 0 00:06:33.713 ==================================================================================== 00:06:33.713 Total 232032/s 1812 MiB/s 0 0' 00:06:33.713 15:48:45 -- accel/accel.sh@20 -- # IFS=: 00:06:33.713 15:48:45 -- accel/accel.sh@20 -- # read -r var val 00:06:33.713 15:48:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:33.713 15:48:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:33.713 15:48:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.713 15:48:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.713 15:48:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.713 15:48:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.713 15:48:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.713 15:48:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.713 15:48:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.713 15:48:45 -- accel/accel.sh@42 -- # jq -r . 00:06:33.971 [2024-11-29 15:48:45.144794] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:33.971 [2024-11-29 15:48:45.145018] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58798 ] 00:06:33.971 [2024-11-29 15:48:45.292651] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.273 [2024-11-29 15:48:45.465189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.273 15:48:45 -- accel/accel.sh@21 -- # val= 00:06:34.273 15:48:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # IFS=: 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # read -r var val 00:06:34.273 15:48:45 -- accel/accel.sh@21 -- # val= 00:06:34.273 15:48:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # IFS=: 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # read -r var val 00:06:34.273 15:48:45 -- accel/accel.sh@21 -- # val=0x1 00:06:34.273 15:48:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # IFS=: 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # read -r var val 00:06:34.273 15:48:45 -- accel/accel.sh@21 -- # val= 00:06:34.273 15:48:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # IFS=: 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # read -r var val 00:06:34.273 15:48:45 -- accel/accel.sh@21 -- # val= 00:06:34.273 15:48:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # IFS=: 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # read -r var val 00:06:34.273 15:48:45 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:34.273 15:48:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.273 15:48:45 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # IFS=: 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # read -r var val 00:06:34.273 15:48:45 -- accel/accel.sh@21 -- # val=0 00:06:34.273 15:48:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # IFS=: 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # read -r var val 00:06:34.273 15:48:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.273 15:48:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # IFS=: 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # read -r var val 00:06:34.273 15:48:45 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:34.273 15:48:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # IFS=: 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # read -r var val 00:06:34.273 15:48:45 -- accel/accel.sh@21 -- # val= 00:06:34.273 15:48:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # IFS=: 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # read -r var val 00:06:34.273 15:48:45 -- accel/accel.sh@21 -- # val=software 00:06:34.273 15:48:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.273 15:48:45 -- accel/accel.sh@23 -- # accel_module=software 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # IFS=: 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # read -r var val 00:06:34.273 15:48:45 -- accel/accel.sh@21 -- # val=32 00:06:34.273 15:48:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # IFS=: 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # read -r var val 00:06:34.273 15:48:45 -- accel/accel.sh@21 -- # val=32 
00:06:34.273 15:48:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # IFS=: 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # read -r var val 00:06:34.273 15:48:45 -- accel/accel.sh@21 -- # val=1 00:06:34.273 15:48:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # IFS=: 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # read -r var val 00:06:34.273 15:48:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:34.273 15:48:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # IFS=: 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # read -r var val 00:06:34.273 15:48:45 -- accel/accel.sh@21 -- # val=Yes 00:06:34.273 15:48:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # IFS=: 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # read -r var val 00:06:34.273 15:48:45 -- accel/accel.sh@21 -- # val= 00:06:34.273 15:48:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # IFS=: 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # read -r var val 00:06:34.273 15:48:45 -- accel/accel.sh@21 -- # val= 00:06:34.273 15:48:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # IFS=: 00:06:34.273 15:48:45 -- accel/accel.sh@20 -- # read -r var val 00:06:36.171 15:48:47 -- accel/accel.sh@21 -- # val= 00:06:36.171 15:48:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.171 15:48:47 -- accel/accel.sh@20 -- # IFS=: 00:06:36.171 15:48:47 -- accel/accel.sh@20 -- # read -r var val 00:06:36.172 15:48:47 -- accel/accel.sh@21 -- # val= 00:06:36.172 15:48:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.172 15:48:47 -- accel/accel.sh@20 -- # IFS=: 00:06:36.172 15:48:47 -- accel/accel.sh@20 -- # read -r var val 00:06:36.172 15:48:47 -- accel/accel.sh@21 -- # val= 00:06:36.172 15:48:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.172 15:48:47 -- accel/accel.sh@20 -- # IFS=: 00:06:36.172 15:48:47 -- accel/accel.sh@20 -- # read -r var val 00:06:36.172 15:48:47 -- accel/accel.sh@21 -- # val= 00:06:36.172 15:48:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.172 15:48:47 -- accel/accel.sh@20 -- # IFS=: 00:06:36.172 15:48:47 -- accel/accel.sh@20 -- # read -r var val 00:06:36.172 15:48:47 -- accel/accel.sh@21 -- # val= 00:06:36.172 15:48:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.172 15:48:47 -- accel/accel.sh@20 -- # IFS=: 00:06:36.172 15:48:47 -- accel/accel.sh@20 -- # read -r var val 00:06:36.172 15:48:47 -- accel/accel.sh@21 -- # val= 00:06:36.172 15:48:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.172 15:48:47 -- accel/accel.sh@20 -- # IFS=: 00:06:36.172 15:48:47 -- accel/accel.sh@20 -- # read -r var val 00:06:36.172 15:48:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:36.172 15:48:47 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:36.172 15:48:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.172 00:06:36.172 real 0m3.909s 00:06:36.172 user 0m3.467s 00:06:36.172 sys 0m0.236s 00:06:36.172 ************************************ 00:06:36.172 END TEST accel_copy_crc32c_C2 00:06:36.172 ************************************ 00:06:36.172 15:48:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:36.172 15:48:47 -- common/autotest_common.sh@10 -- # set +x 00:06:36.172 15:48:47 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:36.172 15:48:47 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
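Each START TEST / END TEST banner pair with the real/user/sys summary between them comes from the run_test helper in autotest_common.sh, which times the named test command. A simplified sketch of that pattern (the real helper also manages xtrace state and return codes):

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"    # produces the real/user/sys lines seen throughout this log
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }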
00:06:36.172 15:48:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.172 15:48:47 -- common/autotest_common.sh@10 -- # set +x 00:06:36.172 ************************************ 00:06:36.172 START TEST accel_dualcast 00:06:36.172 ************************************ 00:06:36.172 15:48:47 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:36.172 15:48:47 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.172 15:48:47 -- accel/accel.sh@17 -- # local accel_module 00:06:36.172 15:48:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:36.172 15:48:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:36.172 15:48:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.172 15:48:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.172 15:48:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.172 15:48:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.172 15:48:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.172 15:48:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.172 15:48:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.172 15:48:47 -- accel/accel.sh@42 -- # jq -r . 00:06:36.172 [2024-11-29 15:48:47.179429] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:36.172 [2024-11-29 15:48:47.179509] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58839 ] 00:06:36.172 [2024-11-29 15:48:47.310302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.172 [2024-11-29 15:48:47.449490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.075 15:48:49 -- accel/accel.sh@18 -- # out=' 00:06:38.075 SPDK Configuration: 00:06:38.075 Core mask: 0x1 00:06:38.075 00:06:38.075 Accel Perf Configuration: 00:06:38.075 Workload Type: dualcast 00:06:38.075 Transfer size: 4096 bytes 00:06:38.075 Vector count 1 00:06:38.075 Module: software 00:06:38.075 Queue depth: 32 00:06:38.075 Allocate depth: 32 00:06:38.075 # threads/core: 1 00:06:38.075 Run time: 1 seconds 00:06:38.075 Verify: Yes 00:06:38.075 00:06:38.075 Running for 1 seconds... 00:06:38.075 00:06:38.075 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:38.075 ------------------------------------------------------------------------------------ 00:06:38.075 0,0 436512/s 1705 MiB/s 0 0 00:06:38.075 ==================================================================================== 00:06:38.075 Total 436512/s 1705 MiB/s 0 0' 00:06:38.075 15:48:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.075 15:48:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.075 15:48:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:38.075 15:48:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:38.075 15:48:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.075 15:48:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.075 15:48:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.075 15:48:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.075 15:48:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.075 15:48:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.075 15:48:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.075 15:48:49 -- accel/accel.sh@42 -- # jq -r . 
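The recurring IFS=: / read -r var val / case "$var" lines throughout this log are accel.sh parsing the captured accel_perf output ($out) line by line, splitting each 'key: value' pair on the colon to pick out fields such as the workload type and module. A rough reconstruction of that loop, as a sketch (variable handling is illustrative; the actual code lives in the accel.sh suite in the SPDK tree):

  # Sketch: extract "Workload Type" and "Module" from accel_perf's output
  while IFS=: read -r var val; do
      case "$var" in
          *'Workload Type'*) accel_opc=${val# } ;;    # e.g. "dualcast"
          *'Module'*)        accel_module=${val# } ;; # e.g. "software"
      esac
  done <<< "$out"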
00:06:38.075 [2024-11-29 15:48:49.095146] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:38.075 [2024-11-29 15:48:49.095614] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58860 ] 00:06:38.075 [2024-11-29 15:48:49.243289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.075 [2024-11-29 15:48:49.395917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.335 15:48:49 -- accel/accel.sh@21 -- # val= 00:06:38.335 15:48:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.335 15:48:49 -- accel/accel.sh@21 -- # val= 00:06:38.335 15:48:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.335 15:48:49 -- accel/accel.sh@21 -- # val=0x1 00:06:38.335 15:48:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.335 15:48:49 -- accel/accel.sh@21 -- # val= 00:06:38.335 15:48:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.335 15:48:49 -- accel/accel.sh@21 -- # val= 00:06:38.335 15:48:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.335 15:48:49 -- accel/accel.sh@21 -- # val=dualcast 00:06:38.335 15:48:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.335 15:48:49 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.335 15:48:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:38.335 15:48:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.335 15:48:49 -- accel/accel.sh@21 -- # val= 00:06:38.335 15:48:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.335 15:48:49 -- accel/accel.sh@21 -- # val=software 00:06:38.335 15:48:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.335 15:48:49 -- accel/accel.sh@23 -- # accel_module=software 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.335 15:48:49 -- accel/accel.sh@21 -- # val=32 00:06:38.335 15:48:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.335 15:48:49 -- accel/accel.sh@21 -- # val=32 00:06:38.335 15:48:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.335 15:48:49 -- accel/accel.sh@21 -- # val=1 00:06:38.335 15:48:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.335 
15:48:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.335 15:48:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:38.335 15:48:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.335 15:48:49 -- accel/accel.sh@21 -- # val=Yes 00:06:38.335 15:48:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.335 15:48:49 -- accel/accel.sh@21 -- # val= 00:06:38.335 15:48:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # read -r var val 00:06:38.335 15:48:49 -- accel/accel.sh@21 -- # val= 00:06:38.335 15:48:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # IFS=: 00:06:38.335 15:48:49 -- accel/accel.sh@20 -- # read -r var val 00:06:39.712 15:48:50 -- accel/accel.sh@21 -- # val= 00:06:39.712 15:48:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.712 15:48:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.712 15:48:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.712 15:48:50 -- accel/accel.sh@21 -- # val= 00:06:39.712 15:48:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.712 15:48:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.712 15:48:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.712 15:48:50 -- accel/accel.sh@21 -- # val= 00:06:39.712 15:48:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.712 15:48:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.712 15:48:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.712 15:48:50 -- accel/accel.sh@21 -- # val= 00:06:39.712 15:48:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.712 15:48:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.712 15:48:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.712 15:48:50 -- accel/accel.sh@21 -- # val= 00:06:39.712 15:48:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.712 15:48:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.712 15:48:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.712 15:48:50 -- accel/accel.sh@21 -- # val= 00:06:39.712 15:48:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.712 15:48:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.712 15:48:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.712 15:48:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:39.712 15:48:50 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:39.712 15:48:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.712 00:06:39.712 real 0m3.833s 00:06:39.712 user 0m3.398s 00:06:39.712 sys 0m0.224s 00:06:39.712 15:48:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:39.712 15:48:50 -- common/autotest_common.sh@10 -- # set +x 00:06:39.712 ************************************ 00:06:39.712 END TEST accel_dualcast 00:06:39.712 ************************************ 00:06:39.712 15:48:51 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:39.712 15:48:51 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:39.712 15:48:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.712 15:48:51 -- common/autotest_common.sh@10 -- # set +x 00:06:39.712 ************************************ 00:06:39.712 START TEST accel_compare 00:06:39.712 ************************************ 00:06:39.712 15:48:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:39.712 
15:48:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:39.712 15:48:51 -- accel/accel.sh@17 -- # local accel_module 00:06:39.712 15:48:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:39.712 15:48:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:39.712 15:48:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.712 15:48:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.712 15:48:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.712 15:48:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.712 15:48:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.712 15:48:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.712 15:48:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.712 15:48:51 -- accel/accel.sh@42 -- # jq -r . 00:06:39.712 [2024-11-29 15:48:51.064614] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:39.713 [2024-11-29 15:48:51.064730] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58901 ] 00:06:39.971 [2024-11-29 15:48:51.209148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.971 [2024-11-29 15:48:51.350260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.870 15:48:52 -- accel/accel.sh@18 -- # out=' 00:06:41.870 SPDK Configuration: 00:06:41.870 Core mask: 0x1 00:06:41.870 00:06:41.870 Accel Perf Configuration: 00:06:41.870 Workload Type: compare 00:06:41.870 Transfer size: 4096 bytes 00:06:41.871 Vector count 1 00:06:41.871 Module: software 00:06:41.871 Queue depth: 32 00:06:41.871 Allocate depth: 32 00:06:41.871 # threads/core: 1 00:06:41.871 Run time: 1 seconds 00:06:41.871 Verify: Yes 00:06:41.871 00:06:41.871 Running for 1 seconds... 00:06:41.871 00:06:41.871 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:41.871 ------------------------------------------------------------------------------------ 00:06:41.871 0,0 561408/s 2193 MiB/s 0 0 00:06:41.871 ==================================================================================== 00:06:41.871 Total 561408/s 2193 MiB/s 0 0' 00:06:41.871 15:48:52 -- accel/accel.sh@20 -- # IFS=: 00:06:41.871 15:48:52 -- accel/accel.sh@20 -- # read -r var val 00:06:41.871 15:48:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:41.871 15:48:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:41.871 15:48:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.871 15:48:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.871 15:48:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.871 15:48:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.871 15:48:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.871 15:48:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.871 15:48:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.871 15:48:52 -- accel/accel.sh@42 -- # jq -r . 00:06:41.871 [2024-11-29 15:48:52.972551] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:41.871 [2024-11-29 15:48:52.972841] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58929 ] 00:06:41.871 [2024-11-29 15:48:53.127108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.871 [2024-11-29 15:48:53.269434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.129 15:48:53 -- accel/accel.sh@21 -- # val= 00:06:42.129 15:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.129 15:48:53 -- accel/accel.sh@21 -- # val= 00:06:42.129 15:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.129 15:48:53 -- accel/accel.sh@21 -- # val=0x1 00:06:42.129 15:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.129 15:48:53 -- accel/accel.sh@21 -- # val= 00:06:42.129 15:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.129 15:48:53 -- accel/accel.sh@21 -- # val= 00:06:42.129 15:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.129 15:48:53 -- accel/accel.sh@21 -- # val=compare 00:06:42.129 15:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.129 15:48:53 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.129 15:48:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.129 15:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.129 15:48:53 -- accel/accel.sh@21 -- # val= 00:06:42.129 15:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.129 15:48:53 -- accel/accel.sh@21 -- # val=software 00:06:42.129 15:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.129 15:48:53 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.129 15:48:53 -- accel/accel.sh@21 -- # val=32 00:06:42.129 15:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.129 15:48:53 -- accel/accel.sh@21 -- # val=32 00:06:42.129 15:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.129 15:48:53 -- accel/accel.sh@21 -- # val=1 00:06:42.129 15:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.129 15:48:53 -- accel/accel.sh@21 -- # val='1 seconds' 
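When triaging performance across the workloads exercised here (copy, fill, copy_crc32c, dualcast, compare, xor), the per-run throughput can be pulled out of a saved copy of this log in one pass. A sketch, assuming the log was saved as build.log:

  # Prints one 'Total <transfers>/s <bandwidth> MiB/s' row per accel_perf run
  grep -Eo 'Total [0-9]+/s +[0-9]+ MiB/s' build.log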
00:06:42.129 15:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.129 15:48:53 -- accel/accel.sh@21 -- # val=Yes 00:06:42.129 15:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.129 15:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.130 15:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.130 15:48:53 -- accel/accel.sh@21 -- # val= 00:06:42.130 15:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.130 15:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.130 15:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.130 15:48:53 -- accel/accel.sh@21 -- # val= 00:06:42.130 15:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.130 15:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.130 15:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:43.570 15:48:54 -- accel/accel.sh@21 -- # val= 00:06:43.570 15:48:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.570 15:48:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.570 15:48:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.570 15:48:54 -- accel/accel.sh@21 -- # val= 00:06:43.570 15:48:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.570 15:48:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.570 15:48:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.570 15:48:54 -- accel/accel.sh@21 -- # val= 00:06:43.570 15:48:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.570 15:48:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.570 15:48:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.570 15:48:54 -- accel/accel.sh@21 -- # val= 00:06:43.570 15:48:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.570 15:48:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.570 15:48:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.570 15:48:54 -- accel/accel.sh@21 -- # val= 00:06:43.570 15:48:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.570 15:48:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.570 15:48:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.570 15:48:54 -- accel/accel.sh@21 -- # val= 00:06:43.570 15:48:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.570 15:48:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.570 15:48:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.570 15:48:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:43.570 15:48:54 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:43.570 15:48:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.570 00:06:43.570 real 0m3.817s 00:06:43.570 user 0m3.386s 00:06:43.570 sys 0m0.227s 00:06:43.570 15:48:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:43.570 15:48:54 -- common/autotest_common.sh@10 -- # set +x 00:06:43.570 ************************************ 00:06:43.570 END TEST accel_compare 00:06:43.570 ************************************ 00:06:43.570 15:48:54 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:43.570 15:48:54 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:43.570 15:48:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.570 15:48:54 -- common/autotest_common.sh@10 -- # set +x 00:06:43.570 ************************************ 00:06:43.570 START TEST accel_xor 00:06:43.570 ************************************ 00:06:43.570 15:48:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:43.570 15:48:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.570 15:48:54 -- accel/accel.sh@17 -- # local accel_module 00:06:43.570 
15:48:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:43.570 15:48:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:43.570 15:48:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.570 15:48:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.570 15:48:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.570 15:48:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.570 15:48:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.570 15:48:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.570 15:48:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.570 15:48:54 -- accel/accel.sh@42 -- # jq -r . 00:06:43.570 [2024-11-29 15:48:54.930049] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:43.570 [2024-11-29 15:48:54.930150] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58970 ] 00:06:43.828 [2024-11-29 15:48:55.066773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.828 [2024-11-29 15:48:55.205726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.728 15:48:56 -- accel/accel.sh@18 -- # out=' 00:06:45.728 SPDK Configuration: 00:06:45.728 Core mask: 0x1 00:06:45.728 00:06:45.728 Accel Perf Configuration: 00:06:45.728 Workload Type: xor 00:06:45.728 Source buffers: 2 00:06:45.728 Transfer size: 4096 bytes 00:06:45.728 Vector count 1 00:06:45.728 Module: software 00:06:45.728 Queue depth: 32 00:06:45.728 Allocate depth: 32 00:06:45.728 # threads/core: 1 00:06:45.728 Run time: 1 seconds 00:06:45.728 Verify: Yes 00:06:45.728 00:06:45.728 Running for 1 seconds... 00:06:45.728 00:06:45.728 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:45.728 ------------------------------------------------------------------------------------ 00:06:45.728 0,0 446912/s 1745 MiB/s 0 0 00:06:45.729 ==================================================================================== 00:06:45.729 Total 446912/s 1745 MiB/s 0 0' 00:06:45.729 15:48:56 -- accel/accel.sh@20 -- # IFS=: 00:06:45.729 15:48:56 -- accel/accel.sh@20 -- # read -r var val 00:06:45.729 15:48:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:45.729 15:48:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:45.729 15:48:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.729 15:48:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.729 15:48:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.729 15:48:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.729 15:48:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.729 15:48:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.729 15:48:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.729 15:48:56 -- accel/accel.sh@42 -- # jq -r . 00:06:45.729 [2024-11-29 15:48:56.823066] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
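The xor test above uses the default two source buffers; the run that follows passes -x 3 for three. To see how software-module xor throughput scales with source count, the same binary can be swept over -x; a sketch (binary path as in this log, no JSON config as in the standalone sketch earlier, and counts beyond 3 are not exercised by this job):

  for x in 2 3 4; do
      echo "== xor with $x source buffers =="
      /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x "$x"
  done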
00:06:45.729 [2024-11-29 15:48:56.823170] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58996 ] 00:06:45.729 [2024-11-29 15:48:56.971191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.729 [2024-11-29 15:48:57.114515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.987 15:48:57 -- accel/accel.sh@21 -- # val= 00:06:45.987 15:48:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.987 15:48:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.987 15:48:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.987 15:48:57 -- accel/accel.sh@21 -- # val= 00:06:45.987 15:48:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.987 15:48:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.987 15:48:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.987 15:48:57 -- accel/accel.sh@21 -- # val=0x1 00:06:45.987 15:48:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.987 15:48:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.987 15:48:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.987 15:48:57 -- accel/accel.sh@21 -- # val= 00:06:45.987 15:48:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.987 15:48:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.987 15:48:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.987 15:48:57 -- accel/accel.sh@21 -- # val= 00:06:45.987 15:48:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.987 15:48:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.987 15:48:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.987 15:48:57 -- accel/accel.sh@21 -- # val=xor 00:06:45.987 15:48:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.987 15:48:57 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:45.987 15:48:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.987 15:48:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.987 15:48:57 -- accel/accel.sh@21 -- # val=2 00:06:45.987 15:48:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.987 15:48:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.987 15:48:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.987 15:48:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:45.987 15:48:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.987 15:48:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.988 15:48:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.988 15:48:57 -- accel/accel.sh@21 -- # val= 00:06:45.988 15:48:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.988 15:48:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.988 15:48:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.988 15:48:57 -- accel/accel.sh@21 -- # val=software 00:06:45.988 15:48:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.988 15:48:57 -- accel/accel.sh@23 -- # accel_module=software 00:06:45.988 15:48:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.988 15:48:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.988 15:48:57 -- accel/accel.sh@21 -- # val=32 00:06:45.988 15:48:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.988 15:48:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.988 15:48:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.988 15:48:57 -- accel/accel.sh@21 -- # val=32 00:06:45.988 15:48:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.988 15:48:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.988 15:48:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.988 15:48:57 -- accel/accel.sh@21 -- # val=1 00:06:45.988 15:48:57 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:45.988 15:48:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.988 15:48:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.988 15:48:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:45.988 15:48:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.988 15:48:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.988 15:48:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.988 15:48:57 -- accel/accel.sh@21 -- # val=Yes 00:06:45.988 15:48:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.988 15:48:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.988 15:48:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.988 15:48:57 -- accel/accel.sh@21 -- # val= 00:06:45.988 15:48:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.988 15:48:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.988 15:48:57 -- accel/accel.sh@20 -- # read -r var val 00:06:45.988 15:48:57 -- accel/accel.sh@21 -- # val= 00:06:45.988 15:48:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.988 15:48:57 -- accel/accel.sh@20 -- # IFS=: 00:06:45.988 15:48:57 -- accel/accel.sh@20 -- # read -r var val 00:06:47.363 15:48:58 -- accel/accel.sh@21 -- # val= 00:06:47.363 15:48:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.363 15:48:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.363 15:48:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.363 15:48:58 -- accel/accel.sh@21 -- # val= 00:06:47.363 15:48:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.363 15:48:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.363 15:48:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.363 15:48:58 -- accel/accel.sh@21 -- # val= 00:06:47.363 15:48:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.363 15:48:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.363 15:48:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.363 15:48:58 -- accel/accel.sh@21 -- # val= 00:06:47.363 15:48:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.363 15:48:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.363 15:48:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.363 15:48:58 -- accel/accel.sh@21 -- # val= 00:06:47.363 15:48:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.363 15:48:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.363 15:48:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.363 15:48:58 -- accel/accel.sh@21 -- # val= 00:06:47.363 15:48:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.363 15:48:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.363 15:48:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.363 15:48:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:47.363 15:48:58 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:47.363 15:48:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.363 00:06:47.363 real 0m3.801s 00:06:47.363 user 0m3.362s 00:06:47.363 sys 0m0.233s 00:06:47.363 15:48:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.363 15:48:58 -- common/autotest_common.sh@10 -- # set +x 00:06:47.363 ************************************ 00:06:47.363 END TEST accel_xor 00:06:47.363 ************************************ 00:06:47.363 15:48:58 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:47.363 15:48:58 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:47.363 15:48:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.363 15:48:58 -- common/autotest_common.sh@10 -- # set +x 00:06:47.363 ************************************ 00:06:47.363 START TEST accel_xor 00:06:47.363 ************************************ 00:06:47.363 
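Note on the xor throughput tables in this test: the xor workload XORs its source buffers into a destination buffer (and, with -y, verifies the result), so the reported bandwidth is just transfers/s times the 4096-byte buffer size. A quick shell check of the Total rows above and in the 3-buffer run below (a sanity-check sketch only, not part of the test harness):

    # Total rows: transfers/s * 4096 bytes, expressed in MiB/s
    echo $(( 446912 * 4096 / 1024 / 1024 ))   # -> 1745 (2 source buffers, table above)
    echo $(( 423296 * 4096 / 1024 / 1024 ))   # -> 1653 (3 source buffers, table below)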
15:48:58 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:47.363 15:48:58 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.363 15:48:58 -- accel/accel.sh@17 -- # local accel_module 00:06:47.363 15:48:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:47.363 15:48:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:47.363 15:48:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.363 15:48:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.363 15:48:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.363 15:48:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.363 15:48:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.363 15:48:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.363 15:48:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.363 15:48:58 -- accel/accel.sh@42 -- # jq -r . 00:06:47.363 [2024-11-29 15:48:58.765840] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:47.363 [2024-11-29 15:48:58.765940] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59038 ] 00:06:47.621 [2024-11-29 15:48:58.912008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.879 [2024-11-29 15:48:59.058577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.253 15:49:00 -- accel/accel.sh@18 -- # out=' 00:06:49.253 SPDK Configuration: 00:06:49.253 Core mask: 0x1 00:06:49.253 00:06:49.253 Accel Perf Configuration: 00:06:49.253 Workload Type: xor 00:06:49.253 Source buffers: 3 00:06:49.253 Transfer size: 4096 bytes 00:06:49.253 Vector count 1 00:06:49.253 Module: software 00:06:49.253 Queue depth: 32 00:06:49.253 Allocate depth: 32 00:06:49.253 # threads/core: 1 00:06:49.253 Run time: 1 seconds 00:06:49.253 Verify: Yes 00:06:49.253 00:06:49.253 Running for 1 seconds... 00:06:49.254 00:06:49.254 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:49.254 ------------------------------------------------------------------------------------ 00:06:49.254 0,0 423296/s 1653 MiB/s 0 0 00:06:49.254 ==================================================================================== 00:06:49.254 Total 423296/s 1653 MiB/s 0 0' 00:06:49.254 15:49:00 -- accel/accel.sh@20 -- # IFS=: 00:06:49.254 15:49:00 -- accel/accel.sh@20 -- # read -r var val 00:06:49.254 15:49:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:49.254 15:49:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:49.254 15:49:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.254 15:49:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.254 15:49:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.254 15:49:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.254 15:49:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.254 15:49:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.254 15:49:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.254 15:49:00 -- accel/accel.sh@42 -- # jq -r . 00:06:49.511 [2024-11-29 15:49:00.687550] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:49.511 [2024-11-29 15:49:00.687811] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59058 ] 00:06:49.511 [2024-11-29 15:49:00.835785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.769 [2024-11-29 15:49:00.980664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.769 15:49:01 -- accel/accel.sh@21 -- # val= 00:06:49.769 15:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 15:49:01 -- accel/accel.sh@21 -- # val= 00:06:49.769 15:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 15:49:01 -- accel/accel.sh@21 -- # val=0x1 00:06:49.769 15:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 15:49:01 -- accel/accel.sh@21 -- # val= 00:06:49.769 15:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 15:49:01 -- accel/accel.sh@21 -- # val= 00:06:49.769 15:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 15:49:01 -- accel/accel.sh@21 -- # val=xor 00:06:49.769 15:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 15:49:01 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 15:49:01 -- accel/accel.sh@21 -- # val=3 00:06:49.769 15:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 15:49:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:49.769 15:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 15:49:01 -- accel/accel.sh@21 -- # val= 00:06:49.769 15:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 15:49:01 -- accel/accel.sh@21 -- # val=software 00:06:49.769 15:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 15:49:01 -- accel/accel.sh@23 -- # accel_module=software 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 15:49:01 -- accel/accel.sh@21 -- # val=32 00:06:49.769 15:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 15:49:01 -- accel/accel.sh@21 -- # val=32 00:06:49.769 15:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 15:49:01 -- accel/accel.sh@21 -- # val=1 00:06:49.769 15:49:01 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 15:49:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:49.769 15:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 15:49:01 -- accel/accel.sh@21 -- # val=Yes 00:06:49.769 15:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 15:49:01 -- accel/accel.sh@21 -- # val= 00:06:49.769 15:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:49.769 15:49:01 -- accel/accel.sh@21 -- # val= 00:06:49.769 15:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:49.769 15:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:51.143 15:49:02 -- accel/accel.sh@21 -- # val= 00:06:51.143 15:49:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.143 15:49:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.143 15:49:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.143 15:49:02 -- accel/accel.sh@21 -- # val= 00:06:51.143 15:49:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.143 15:49:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.143 15:49:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.143 15:49:02 -- accel/accel.sh@21 -- # val= 00:06:51.143 15:49:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.143 15:49:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.143 15:49:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.143 15:49:02 -- accel/accel.sh@21 -- # val= 00:06:51.143 15:49:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.143 15:49:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.143 15:49:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.143 15:49:02 -- accel/accel.sh@21 -- # val= 00:06:51.143 15:49:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.143 15:49:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.143 15:49:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.143 15:49:02 -- accel/accel.sh@21 -- # val= 00:06:51.143 15:49:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.143 15:49:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.143 15:49:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.143 15:49:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:51.143 ************************************ 00:06:51.143 END TEST accel_xor 00:06:51.143 ************************************ 00:06:51.143 15:49:02 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:51.143 15:49:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.143 00:06:51.143 real 0m3.836s 00:06:51.143 user 0m3.385s 00:06:51.143 sys 0m0.245s 00:06:51.143 15:49:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:51.144 15:49:02 -- common/autotest_common.sh@10 -- # set +x 00:06:51.401 15:49:02 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:51.401 15:49:02 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:51.401 15:49:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.401 15:49:02 -- common/autotest_common.sh@10 -- # set +x 00:06:51.401 ************************************ 00:06:51.401 START TEST accel_dif_verify 00:06:51.401 ************************************ 
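The dif_verify workload below checks T10 DIF protection information: per the configuration echoed in its results (Block size: 512 bytes, Metadata size: 8 bytes), each 4096-byte transfer covers eight 512-byte blocks, each protected by an 8-byte DIF tuple (guard, application and reference tags). A shell sketch of that layout arithmetic and of the Total-row bandwidth (illustrative only, reproducing figures from the table below):

    echo $(( 4096 / 512 ))                    # -> 8 DIF tuples checked per 4096-byte transfer
    echo $(( 128768 * 4096 / 1024 / 1024 ))   # -> 503 MiB/s, the dif_verify Total row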
00:06:51.401 15:49:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:51.401 15:49:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:51.401 15:49:02 -- accel/accel.sh@17 -- # local accel_module 00:06:51.401 15:49:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:51.401 15:49:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:51.401 15:49:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.401 15:49:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.401 15:49:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.401 15:49:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.401 15:49:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.401 15:49:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.401 15:49:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.401 15:49:02 -- accel/accel.sh@42 -- # jq -r . 00:06:51.401 [2024-11-29 15:49:02.642306] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:51.401 [2024-11-29 15:49:02.642408] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59100 ] 00:06:51.401 [2024-11-29 15:49:02.787286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.658 [2024-11-29 15:49:02.937511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.568 15:49:04 -- accel/accel.sh@18 -- # out=' 00:06:53.568 SPDK Configuration: 00:06:53.568 Core mask: 0x1 00:06:53.568 00:06:53.568 Accel Perf Configuration: 00:06:53.568 Workload Type: dif_verify 00:06:53.568 Vector size: 4096 bytes 00:06:53.568 Transfer size: 4096 bytes 00:06:53.568 Block size: 512 bytes 00:06:53.568 Metadata size: 8 bytes 00:06:53.568 Vector count 1 00:06:53.568 Module: software 00:06:53.568 Queue depth: 32 00:06:53.568 Allocate depth: 32 00:06:53.568 # threads/core: 1 00:06:53.568 Run time: 1 seconds 00:06:53.568 Verify: No 00:06:53.568 00:06:53.568 Running for 1 seconds... 00:06:53.568 00:06:53.568 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:53.568 ------------------------------------------------------------------------------------ 00:06:53.568 0,0 128768/s 510 MiB/s 0 0 00:06:53.568 ==================================================================================== 00:06:53.568 Total 128768/s 503 MiB/s 0 0' 00:06:53.568 15:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.568 15:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.568 15:49:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:53.568 15:49:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:53.568 15:49:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.568 15:49:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.568 15:49:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.568 15:49:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.568 15:49:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.568 15:49:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.568 15:49:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.568 15:49:04 -- accel/accel.sh@42 -- # jq -r . 00:06:53.568 [2024-11-29 15:49:04.562413] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:53.568 [2024-11-29 15:49:04.562638] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59122 ] 00:06:53.568 [2024-11-29 15:49:04.707397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.568 [2024-11-29 15:49:04.855269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.568 15:49:04 -- accel/accel.sh@21 -- # val= 00:06:53.568 15:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.568 15:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.568 15:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.568 15:49:04 -- accel/accel.sh@21 -- # val= 00:06:53.568 15:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.568 15:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.568 15:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.568 15:49:04 -- accel/accel.sh@21 -- # val=0x1 00:06:53.568 15:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.568 15:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.568 15:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.568 15:49:04 -- accel/accel.sh@21 -- # val= 00:06:53.568 15:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.568 15:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.568 15:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.568 15:49:04 -- accel/accel.sh@21 -- # val= 00:06:53.568 15:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.568 15:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.568 15:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.568 15:49:04 -- accel/accel.sh@21 -- # val=dif_verify 00:06:53.568 15:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.568 15:49:04 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:53.568 15:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.568 15:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.568 15:49:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:53.568 15:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.568 15:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.568 15:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.568 15:49:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:53.568 15:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.568 15:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.568 15:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.568 15:49:04 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:53.568 15:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.568 15:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.568 15:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.568 15:49:04 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:53.568 15:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.568 15:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.568 15:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.569 15:49:04 -- accel/accel.sh@21 -- # val= 00:06:53.569 15:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.569 15:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.569 15:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.569 15:49:04 -- accel/accel.sh@21 -- # val=software 00:06:53.569 15:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.569 15:49:04 -- accel/accel.sh@23 -- # accel_module=software 00:06:53.569 15:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.569 15:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.569 15:49:04 -- accel/accel.sh@21 
-- # val=32 00:06:53.569 15:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.569 15:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.569 15:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.569 15:49:04 -- accel/accel.sh@21 -- # val=32 00:06:53.569 15:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.569 15:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.569 15:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.569 15:49:04 -- accel/accel.sh@21 -- # val=1 00:06:53.569 15:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.569 15:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.569 15:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.569 15:49:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:53.569 15:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.569 15:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.569 15:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.569 15:49:04 -- accel/accel.sh@21 -- # val=No 00:06:53.569 15:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.569 15:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.569 15:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.569 15:49:04 -- accel/accel.sh@21 -- # val= 00:06:53.569 15:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.569 15:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.569 15:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.569 15:49:04 -- accel/accel.sh@21 -- # val= 00:06:53.569 15:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.569 15:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.569 15:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:55.470 15:49:06 -- accel/accel.sh@21 -- # val= 00:06:55.470 15:49:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.470 15:49:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.470 15:49:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.470 15:49:06 -- accel/accel.sh@21 -- # val= 00:06:55.470 15:49:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.470 15:49:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.470 15:49:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.470 15:49:06 -- accel/accel.sh@21 -- # val= 00:06:55.470 15:49:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.470 15:49:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.470 15:49:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.470 15:49:06 -- accel/accel.sh@21 -- # val= 00:06:55.470 15:49:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.470 15:49:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.470 15:49:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.470 15:49:06 -- accel/accel.sh@21 -- # val= 00:06:55.470 15:49:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.470 15:49:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.470 15:49:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.470 15:49:06 -- accel/accel.sh@21 -- # val= 00:06:55.470 15:49:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.470 15:49:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.470 15:49:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.470 15:49:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:55.470 15:49:06 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:55.470 15:49:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.470 00:06:55.470 real 0m3.831s 00:06:55.470 user 0m3.401s 00:06:55.470 sys 0m0.228s 00:06:55.470 ************************************ 00:06:55.470 END TEST accel_dif_verify 00:06:55.470 ************************************ 00:06:55.470 15:49:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:55.470 
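dif_generate, started below, is the producing counterpart to dif_verify: instead of checking existing protection information it computes the 8-byte DIF tuple for each 512-byte block. Its Total row can be sanity-checked the same way (sketch only, arithmetic taken from the table that follows):

    echo $(( 119424 * 4096 / 1024 / 1024 ))   # -> 466 MiB/s, matching the dif_generate Total row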
15:49:06 -- common/autotest_common.sh@10 -- # set +x 00:06:55.470 15:49:06 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:55.470 15:49:06 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:55.470 15:49:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.470 15:49:06 -- common/autotest_common.sh@10 -- # set +x 00:06:55.470 ************************************ 00:06:55.470 START TEST accel_dif_generate 00:06:55.470 ************************************ 00:06:55.470 15:49:06 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:55.470 15:49:06 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.470 15:49:06 -- accel/accel.sh@17 -- # local accel_module 00:06:55.470 15:49:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:55.470 15:49:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:55.470 15:49:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.470 15:49:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.470 15:49:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.470 15:49:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.470 15:49:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.470 15:49:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.470 15:49:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.470 15:49:06 -- accel/accel.sh@42 -- # jq -r . 00:06:55.470 [2024-11-29 15:49:06.516709] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:55.470 [2024-11-29 15:49:06.516825] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59163 ] 00:06:55.470 [2024-11-29 15:49:06.666303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.470 [2024-11-29 15:49:06.844121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.372 15:49:08 -- accel/accel.sh@18 -- # out=' 00:06:57.372 SPDK Configuration: 00:06:57.372 Core mask: 0x1 00:06:57.372 00:06:57.372 Accel Perf Configuration: 00:06:57.372 Workload Type: dif_generate 00:06:57.372 Vector size: 4096 bytes 00:06:57.372 Transfer size: 4096 bytes 00:06:57.372 Block size: 512 bytes 00:06:57.372 Metadata size: 8 bytes 00:06:57.372 Vector count 1 00:06:57.372 Module: software 00:06:57.372 Queue depth: 32 00:06:57.372 Allocate depth: 32 00:06:57.372 # threads/core: 1 00:06:57.372 Run time: 1 seconds 00:06:57.372 Verify: No 00:06:57.372 00:06:57.372 Running for 1 seconds... 
00:06:57.372 00:06:57.372 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.372 ------------------------------------------------------------------------------------ 00:06:57.372 0,0 119424/s 473 MiB/s 0 0 00:06:57.372 ==================================================================================== 00:06:57.372 Total 119424/s 466 MiB/s 0 0' 00:06:57.372 15:49:08 -- accel/accel.sh@20 -- # IFS=: 00:06:57.372 15:49:08 -- accel/accel.sh@20 -- # read -r var val 00:06:57.372 15:49:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:57.372 15:49:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:57.372 15:49:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.372 15:49:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.372 15:49:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.372 15:49:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.372 15:49:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.372 15:49:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.372 15:49:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.372 15:49:08 -- accel/accel.sh@42 -- # jq -r . 00:06:57.372 [2024-11-29 15:49:08.496373] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:57.372 [2024-11-29 15:49:08.496479] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59189 ] 00:06:57.372 [2024-11-29 15:49:08.643518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.372 [2024-11-29 15:49:08.789188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.631 15:49:08 -- accel/accel.sh@21 -- # val= 00:06:57.631 15:49:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # IFS=: 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # read -r var val 00:06:57.631 15:49:08 -- accel/accel.sh@21 -- # val= 00:06:57.631 15:49:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # IFS=: 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # read -r var val 00:06:57.631 15:49:08 -- accel/accel.sh@21 -- # val=0x1 00:06:57.631 15:49:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # IFS=: 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # read -r var val 00:06:57.631 15:49:08 -- accel/accel.sh@21 -- # val= 00:06:57.631 15:49:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # IFS=: 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # read -r var val 00:06:57.631 15:49:08 -- accel/accel.sh@21 -- # val= 00:06:57.631 15:49:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # IFS=: 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # read -r var val 00:06:57.631 15:49:08 -- accel/accel.sh@21 -- # val=dif_generate 00:06:57.631 15:49:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.631 15:49:08 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # IFS=: 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # read -r var val 00:06:57.631 15:49:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.631 15:49:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # IFS=: 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # read -r var val 
00:06:57.631 15:49:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.631 15:49:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # IFS=: 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # read -r var val 00:06:57.631 15:49:08 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:57.631 15:49:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # IFS=: 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # read -r var val 00:06:57.631 15:49:08 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:57.631 15:49:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # IFS=: 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # read -r var val 00:06:57.631 15:49:08 -- accel/accel.sh@21 -- # val= 00:06:57.631 15:49:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # IFS=: 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # read -r var val 00:06:57.631 15:49:08 -- accel/accel.sh@21 -- # val=software 00:06:57.631 15:49:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.631 15:49:08 -- accel/accel.sh@23 -- # accel_module=software 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # IFS=: 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # read -r var val 00:06:57.631 15:49:08 -- accel/accel.sh@21 -- # val=32 00:06:57.631 15:49:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # IFS=: 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # read -r var val 00:06:57.631 15:49:08 -- accel/accel.sh@21 -- # val=32 00:06:57.631 15:49:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # IFS=: 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # read -r var val 00:06:57.631 15:49:08 -- accel/accel.sh@21 -- # val=1 00:06:57.631 15:49:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # IFS=: 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # read -r var val 00:06:57.631 15:49:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:57.631 15:49:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # IFS=: 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # read -r var val 00:06:57.631 15:49:08 -- accel/accel.sh@21 -- # val=No 00:06:57.631 15:49:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # IFS=: 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # read -r var val 00:06:57.631 15:49:08 -- accel/accel.sh@21 -- # val= 00:06:57.631 15:49:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # IFS=: 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # read -r var val 00:06:57.631 15:49:08 -- accel/accel.sh@21 -- # val= 00:06:57.631 15:49:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # IFS=: 00:06:57.631 15:49:08 -- accel/accel.sh@20 -- # read -r var val 00:06:59.008 15:49:10 -- accel/accel.sh@21 -- # val= 00:06:59.008 15:49:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.008 15:49:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.008 15:49:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.008 15:49:10 -- accel/accel.sh@21 -- # val= 00:06:59.008 15:49:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.008 15:49:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.008 15:49:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.008 15:49:10 -- accel/accel.sh@21 -- # val= 00:06:59.008 15:49:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.008 15:49:10 -- 
accel/accel.sh@20 -- # IFS=: 00:06:59.008 15:49:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.008 15:49:10 -- accel/accel.sh@21 -- # val= 00:06:59.008 15:49:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.008 15:49:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.008 15:49:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.008 15:49:10 -- accel/accel.sh@21 -- # val= 00:06:59.008 15:49:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.008 15:49:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.008 15:49:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.008 15:49:10 -- accel/accel.sh@21 -- # val= 00:06:59.008 15:49:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.008 15:49:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.008 15:49:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.008 15:49:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.008 15:49:10 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:59.008 ************************************ 00:06:59.008 END TEST accel_dif_generate 00:06:59.008 ************************************ 00:06:59.008 15:49:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.008 00:06:59.008 real 0m3.897s 00:06:59.008 user 0m3.434s 00:06:59.008 sys 0m0.257s 00:06:59.008 15:49:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.008 15:49:10 -- common/autotest_common.sh@10 -- # set +x 00:06:59.008 15:49:10 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:59.008 15:49:10 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:59.008 15:49:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.008 15:49:10 -- common/autotest_common.sh@10 -- # set +x 00:06:59.008 ************************************ 00:06:59.008 START TEST accel_dif_generate_copy 00:06:59.008 ************************************ 00:06:59.008 15:49:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:59.008 15:49:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.008 15:49:10 -- accel/accel.sh@17 -- # local accel_module 00:06:59.008 15:49:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:59.008 15:49:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:59.008 15:49:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.008 15:49:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.008 15:49:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.008 15:49:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.008 15:49:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.008 15:49:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.008 15:49:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.008 15:49:10 -- accel/accel.sh@42 -- # jq -r . 00:06:59.267 [2024-11-29 15:49:10.453373] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:59.267 [2024-11-29 15:49:10.453490] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59230 ] 00:06:59.267 [2024-11-29 15:49:10.600138] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.525 [2024-11-29 15:49:10.742990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.899 15:49:12 -- accel/accel.sh@18 -- # out=' 00:07:00.899 SPDK Configuration: 00:07:00.899 Core mask: 0x1 00:07:00.899 00:07:00.899 Accel Perf Configuration: 00:07:00.899 Workload Type: dif_generate_copy 00:07:00.899 Vector size: 4096 bytes 00:07:00.899 Transfer size: 4096 bytes 00:07:00.899 Vector count 1 00:07:00.899 Module: software 00:07:00.899 Queue depth: 32 00:07:00.899 Allocate depth: 32 00:07:00.899 # threads/core: 1 00:07:00.899 Run time: 1 seconds 00:07:00.899 Verify: No 00:07:00.899 00:07:00.899 Running for 1 seconds... 00:07:00.899 00:07:00.899 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.899 ------------------------------------------------------------------------------------ 00:07:00.899 0,0 118144/s 468 MiB/s 0 0 00:07:00.899 ==================================================================================== 00:07:00.899 Total 118144/s 461 MiB/s 0 0' 00:07:00.899 15:49:12 -- accel/accel.sh@20 -- # IFS=: 00:07:00.899 15:49:12 -- accel/accel.sh@20 -- # read -r var val 00:07:00.899 15:49:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:00.899 15:49:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.899 15:49:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.899 15:49:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:00.899 15:49:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.899 15:49:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.899 15:49:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.899 15:49:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.899 15:49:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.899 15:49:12 -- accel/accel.sh@42 -- # jq -r . 00:07:01.158 [2024-11-29 15:49:12.358454] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:01.158 [2024-11-29 15:49:12.358555] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59256 ] 00:07:01.158 [2024-11-29 15:49:12.509078] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.416 [2024-11-29 15:49:12.680472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.416 15:49:12 -- accel/accel.sh@21 -- # val= 00:07:01.416 15:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.416 15:49:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.416 15:49:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.416 15:49:12 -- accel/accel.sh@21 -- # val= 00:07:01.416 15:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.416 15:49:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.416 15:49:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.416 15:49:12 -- accel/accel.sh@21 -- # val=0x1 00:07:01.416 15:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.416 15:49:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.416 15:49:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.416 15:49:12 -- accel/accel.sh@21 -- # val= 00:07:01.416 15:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.416 15:49:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.416 15:49:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.416 15:49:12 -- accel/accel.sh@21 -- # val= 00:07:01.416 15:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.416 15:49:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.416 15:49:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.416 15:49:12 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:01.417 15:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.417 15:49:12 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:01.417 15:49:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.417 15:49:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.417 15:49:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:01.417 15:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.417 15:49:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.417 15:49:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.417 15:49:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:01.417 15:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.417 15:49:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.417 15:49:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.417 15:49:12 -- accel/accel.sh@21 -- # val= 00:07:01.417 15:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.417 15:49:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.417 15:49:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.417 15:49:12 -- accel/accel.sh@21 -- # val=software 00:07:01.417 15:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.417 15:49:12 -- accel/accel.sh@23 -- # accel_module=software 00:07:01.417 15:49:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.417 15:49:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.417 15:49:12 -- accel/accel.sh@21 -- # val=32 00:07:01.417 15:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.417 15:49:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.417 15:49:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.417 15:49:12 -- accel/accel.sh@21 -- # val=32 00:07:01.417 15:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.417 15:49:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.417 15:49:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.417 15:49:12 -- accel/accel.sh@21 
-- # val=1 00:07:01.417 15:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.417 15:49:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.417 15:49:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.417 15:49:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:01.417 15:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.417 15:49:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.417 15:49:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.417 15:49:12 -- accel/accel.sh@21 -- # val=No 00:07:01.417 15:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.417 15:49:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.417 15:49:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.417 15:49:12 -- accel/accel.sh@21 -- # val= 00:07:01.417 15:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.417 15:49:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.417 15:49:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.417 15:49:12 -- accel/accel.sh@21 -- # val= 00:07:01.417 15:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.417 15:49:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.417 15:49:12 -- accel/accel.sh@20 -- # read -r var val 00:07:03.341 15:49:14 -- accel/accel.sh@21 -- # val= 00:07:03.341 15:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.341 15:49:14 -- accel/accel.sh@20 -- # IFS=: 00:07:03.341 15:49:14 -- accel/accel.sh@20 -- # read -r var val 00:07:03.341 15:49:14 -- accel/accel.sh@21 -- # val= 00:07:03.341 15:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.341 15:49:14 -- accel/accel.sh@20 -- # IFS=: 00:07:03.341 15:49:14 -- accel/accel.sh@20 -- # read -r var val 00:07:03.341 15:49:14 -- accel/accel.sh@21 -- # val= 00:07:03.341 15:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.341 15:49:14 -- accel/accel.sh@20 -- # IFS=: 00:07:03.341 15:49:14 -- accel/accel.sh@20 -- # read -r var val 00:07:03.341 15:49:14 -- accel/accel.sh@21 -- # val= 00:07:03.341 15:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.341 15:49:14 -- accel/accel.sh@20 -- # IFS=: 00:07:03.341 15:49:14 -- accel/accel.sh@20 -- # read -r var val 00:07:03.341 15:49:14 -- accel/accel.sh@21 -- # val= 00:07:03.341 15:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.341 15:49:14 -- accel/accel.sh@20 -- # IFS=: 00:07:03.341 15:49:14 -- accel/accel.sh@20 -- # read -r var val 00:07:03.341 15:49:14 -- accel/accel.sh@21 -- # val= 00:07:03.341 15:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.341 15:49:14 -- accel/accel.sh@20 -- # IFS=: 00:07:03.341 15:49:14 -- accel/accel.sh@20 -- # read -r var val 00:07:03.341 ************************************ 00:07:03.341 END TEST accel_dif_generate_copy 00:07:03.341 ************************************ 00:07:03.341 15:49:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.341 15:49:14 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:03.341 15:49:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.341 00:07:03.341 real 0m3.873s 00:07:03.341 user 0m3.434s 00:07:03.341 sys 0m0.234s 00:07:03.341 15:49:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.341 15:49:14 -- common/autotest_common.sh@10 -- # set +x 00:07:03.341 15:49:14 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:03.341 15:49:14 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:03.341 15:49:14 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:03.341 15:49:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.341 15:49:14 -- 
common/autotest_common.sh@10 -- # set +x 00:07:03.341 ************************************ 00:07:03.341 START TEST accel_comp 00:07:03.341 ************************************ 00:07:03.342 15:49:14 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:03.342 15:49:14 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.342 15:49:14 -- accel/accel.sh@17 -- # local accel_module 00:07:03.342 15:49:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:03.342 15:49:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:03.342 15:49:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.342 15:49:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.342 15:49:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.342 15:49:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.342 15:49:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.342 15:49:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.342 15:49:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.342 15:49:14 -- accel/accel.sh@42 -- # jq -r . 00:07:03.342 [2024-11-29 15:49:14.363693] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:03.342 [2024-11-29 15:49:14.363798] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59297 ] 00:07:03.342 [2024-11-29 15:49:14.511898] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.342 [2024-11-29 15:49:14.655336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.239 15:49:16 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:05.239 00:07:05.239 SPDK Configuration: 00:07:05.239 Core mask: 0x1 00:07:05.239 00:07:05.239 Accel Perf Configuration: 00:07:05.239 Workload Type: compress 00:07:05.239 Transfer size: 4096 bytes 00:07:05.239 Vector count 1 00:07:05.239 Module: software 00:07:05.239 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:05.239 Queue depth: 32 00:07:05.239 Allocate depth: 32 00:07:05.239 # threads/core: 1 00:07:05.239 Run time: 1 seconds 00:07:05.239 Verify: No 00:07:05.239 00:07:05.239 Running for 1 seconds... 
00:07:05.239 00:07:05.239 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:05.239 ------------------------------------------------------------------------------------ 00:07:05.239 0,0 64192/s 267 MiB/s 0 0 00:07:05.239 ==================================================================================== 00:07:05.239 Total 64192/s 250 MiB/s 0 0' 00:07:05.239 15:49:16 -- accel/accel.sh@20 -- # IFS=: 00:07:05.239 15:49:16 -- accel/accel.sh@20 -- # read -r var val 00:07:05.239 15:49:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:05.239 15:49:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.239 15:49:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:05.239 15:49:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.239 15:49:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.239 15:49:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.239 15:49:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.239 15:49:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.239 15:49:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.239 15:49:16 -- accel/accel.sh@42 -- # jq -r . 00:07:05.239 [2024-11-29 15:49:16.277568] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:05.239 [2024-11-29 15:49:16.277670] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59323 ] 00:07:05.239 [2024-11-29 15:49:16.423757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.239 [2024-11-29 15:49:16.568480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.498 15:49:16 -- accel/accel.sh@21 -- # val= 00:07:05.498 15:49:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # IFS=: 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # read -r var val 00:07:05.498 15:49:16 -- accel/accel.sh@21 -- # val= 00:07:05.498 15:49:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # IFS=: 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # read -r var val 00:07:05.498 15:49:16 -- accel/accel.sh@21 -- # val= 00:07:05.498 15:49:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # IFS=: 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # read -r var val 00:07:05.498 15:49:16 -- accel/accel.sh@21 -- # val=0x1 00:07:05.498 15:49:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # IFS=: 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # read -r var val 00:07:05.498 15:49:16 -- accel/accel.sh@21 -- # val= 00:07:05.498 15:49:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # IFS=: 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # read -r var val 00:07:05.498 15:49:16 -- accel/accel.sh@21 -- # val= 00:07:05.498 15:49:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # IFS=: 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # read -r var val 00:07:05.498 15:49:16 -- accel/accel.sh@21 -- # val=compress 00:07:05.498 15:49:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.498 15:49:16 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # IFS=: 
00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # read -r var val 00:07:05.498 15:49:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.498 15:49:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # IFS=: 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # read -r var val 00:07:05.498 15:49:16 -- accel/accel.sh@21 -- # val= 00:07:05.498 15:49:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # IFS=: 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # read -r var val 00:07:05.498 15:49:16 -- accel/accel.sh@21 -- # val=software 00:07:05.498 15:49:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.498 15:49:16 -- accel/accel.sh@23 -- # accel_module=software 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # IFS=: 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # read -r var val 00:07:05.498 15:49:16 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:05.498 15:49:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # IFS=: 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # read -r var val 00:07:05.498 15:49:16 -- accel/accel.sh@21 -- # val=32 00:07:05.498 15:49:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # IFS=: 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # read -r var val 00:07:05.498 15:49:16 -- accel/accel.sh@21 -- # val=32 00:07:05.498 15:49:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # IFS=: 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # read -r var val 00:07:05.498 15:49:16 -- accel/accel.sh@21 -- # val=1 00:07:05.498 15:49:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # IFS=: 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # read -r var val 00:07:05.498 15:49:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:05.498 15:49:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # IFS=: 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # read -r var val 00:07:05.498 15:49:16 -- accel/accel.sh@21 -- # val=No 00:07:05.498 15:49:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # IFS=: 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # read -r var val 00:07:05.498 15:49:16 -- accel/accel.sh@21 -- # val= 00:07:05.498 15:49:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # IFS=: 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # read -r var val 00:07:05.498 15:49:16 -- accel/accel.sh@21 -- # val= 00:07:05.498 15:49:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # IFS=: 00:07:05.498 15:49:16 -- accel/accel.sh@20 -- # read -r var val 00:07:06.877 15:49:18 -- accel/accel.sh@21 -- # val= 00:07:06.877 15:49:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.877 15:49:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.877 15:49:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.877 15:49:18 -- accel/accel.sh@21 -- # val= 00:07:06.877 15:49:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.877 15:49:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.877 15:49:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.877 15:49:18 -- accel/accel.sh@21 -- # val= 00:07:06.877 15:49:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.877 15:49:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.877 15:49:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.877 15:49:18 -- accel/accel.sh@21 -- # val= 
00:07:06.877 15:49:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.877 15:49:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.877 15:49:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.877 15:49:18 -- accel/accel.sh@21 -- # val= 00:07:06.877 15:49:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.877 15:49:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.877 15:49:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.877 15:49:18 -- accel/accel.sh@21 -- # val= 00:07:06.877 15:49:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.877 15:49:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.877 15:49:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.877 ************************************ 00:07:06.877 END TEST accel_comp 00:07:06.877 ************************************ 00:07:06.877 15:49:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.877 15:49:18 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:06.877 15:49:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.877 00:07:06.877 real 0m3.823s 00:07:06.877 user 0m3.392s 00:07:06.877 sys 0m0.225s 00:07:06.877 15:49:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.877 15:49:18 -- common/autotest_common.sh@10 -- # set +x 00:07:06.877 15:49:18 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:06.877 15:49:18 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:06.877 15:49:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.877 15:49:18 -- common/autotest_common.sh@10 -- # set +x 00:07:06.877 ************************************ 00:07:06.877 START TEST accel_decomp 00:07:06.877 ************************************ 00:07:06.877 15:49:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:06.877 15:49:18 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.877 15:49:18 -- accel/accel.sh@17 -- # local accel_module 00:07:06.877 15:49:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:06.877 15:49:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:06.877 15:49:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.877 15:49:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.877 15:49:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.877 15:49:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.877 15:49:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.877 15:49:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.877 15:49:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.877 15:49:18 -- accel/accel.sh@42 -- # jq -r . 00:07:06.877 [2024-11-29 15:49:18.236419] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:06.877 [2024-11-29 15:49:18.236520] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59359 ] 00:07:07.136 [2024-11-29 15:49:18.382890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.137 [2024-11-29 15:49:18.528073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.039 15:49:20 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:09.039 00:07:09.039 SPDK Configuration: 00:07:09.039 Core mask: 0x1 00:07:09.039 00:07:09.039 Accel Perf Configuration: 00:07:09.039 Workload Type: decompress 00:07:09.039 Transfer size: 4096 bytes 00:07:09.039 Vector count 1 00:07:09.039 Module: software 00:07:09.039 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:09.039 Queue depth: 32 00:07:09.039 Allocate depth: 32 00:07:09.039 # threads/core: 1 00:07:09.039 Run time: 1 seconds 00:07:09.039 Verify: Yes 00:07:09.039 00:07:09.039 Running for 1 seconds... 00:07:09.039 00:07:09.039 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:09.039 ------------------------------------------------------------------------------------ 00:07:09.039 0,0 81440/s 150 MiB/s 0 0 00:07:09.039 ==================================================================================== 00:07:09.039 Total 81440/s 318 MiB/s 0 0' 00:07:09.039 15:49:20 -- accel/accel.sh@20 -- # IFS=: 00:07:09.039 15:49:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:09.039 15:49:20 -- accel/accel.sh@20 -- # read -r var val 00:07:09.039 15:49:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:09.039 15:49:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.039 15:49:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.039 15:49:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.039 15:49:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.039 15:49:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.039 15:49:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.039 15:49:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.039 15:49:20 -- accel/accel.sh@42 -- # jq -r . 00:07:09.039 [2024-11-29 15:49:20.146470] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:09.039 [2024-11-29 15:49:20.146577] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59385 ] 00:07:09.039 [2024-11-29 15:49:20.294953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.039 [2024-11-29 15:49:20.439681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.298 15:49:20 -- accel/accel.sh@21 -- # val= 00:07:09.298 15:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.298 15:49:20 -- accel/accel.sh@20 -- # IFS=: 00:07:09.298 15:49:20 -- accel/accel.sh@20 -- # read -r var val 00:07:09.298 15:49:20 -- accel/accel.sh@21 -- # val= 00:07:09.298 15:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.298 15:49:20 -- accel/accel.sh@20 -- # IFS=: 00:07:09.298 15:49:20 -- accel/accel.sh@20 -- # read -r var val 00:07:09.298 15:49:20 -- accel/accel.sh@21 -- # val= 00:07:09.298 15:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.298 15:49:20 -- accel/accel.sh@20 -- # IFS=: 00:07:09.298 15:49:20 -- accel/accel.sh@20 -- # read -r var val 00:07:09.298 15:49:20 -- accel/accel.sh@21 -- # val=0x1 00:07:09.298 15:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.298 15:49:20 -- accel/accel.sh@20 -- # IFS=: 00:07:09.298 15:49:20 -- accel/accel.sh@20 -- # read -r var val 00:07:09.298 15:49:20 -- accel/accel.sh@21 -- # val= 00:07:09.298 15:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.298 15:49:20 -- accel/accel.sh@20 -- # IFS=: 00:07:09.298 15:49:20 -- accel/accel.sh@20 -- # read -r var val 00:07:09.298 15:49:20 -- accel/accel.sh@21 -- # val= 00:07:09.298 15:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.298 15:49:20 -- accel/accel.sh@20 -- # IFS=: 00:07:09.298 15:49:20 -- accel/accel.sh@20 -- # read -r var val 00:07:09.298 15:49:20 -- accel/accel.sh@21 -- # val=decompress 00:07:09.298 15:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.298 15:49:20 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:09.298 15:49:20 -- accel/accel.sh@20 -- # IFS=: 00:07:09.298 15:49:20 -- accel/accel.sh@20 -- # read -r var val 00:07:09.298 15:49:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:09.298 15:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.298 15:49:20 -- accel/accel.sh@20 -- # IFS=: 00:07:09.298 15:49:20 -- accel/accel.sh@20 -- # read -r var val 00:07:09.298 15:49:20 -- accel/accel.sh@21 -- # val= 00:07:09.298 15:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.298 15:49:20 -- accel/accel.sh@20 -- # IFS=: 00:07:09.298 15:49:20 -- accel/accel.sh@20 -- # read -r var val 00:07:09.298 15:49:20 -- accel/accel.sh@21 -- # val=software 00:07:09.298 15:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.298 15:49:20 -- accel/accel.sh@23 -- # accel_module=software 00:07:09.298 15:49:20 -- accel/accel.sh@20 -- # IFS=: 00:07:09.298 15:49:20 -- accel/accel.sh@20 -- # read -r var val 00:07:09.299 15:49:20 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:09.299 15:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.299 15:49:20 -- accel/accel.sh@20 -- # IFS=: 00:07:09.299 15:49:20 -- accel/accel.sh@20 -- # read -r var val 00:07:09.299 15:49:20 -- accel/accel.sh@21 -- # val=32 00:07:09.299 15:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.299 15:49:20 -- accel/accel.sh@20 -- # IFS=: 00:07:09.299 15:49:20 -- accel/accel.sh@20 -- # read -r var val 00:07:09.299 15:49:20 -- 
accel/accel.sh@21 -- # val=32 00:07:09.299 15:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.299 15:49:20 -- accel/accel.sh@20 -- # IFS=: 00:07:09.299 15:49:20 -- accel/accel.sh@20 -- # read -r var val 00:07:09.299 15:49:20 -- accel/accel.sh@21 -- # val=1 00:07:09.299 15:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.299 15:49:20 -- accel/accel.sh@20 -- # IFS=: 00:07:09.299 15:49:20 -- accel/accel.sh@20 -- # read -r var val 00:07:09.299 15:49:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:09.299 15:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.299 15:49:20 -- accel/accel.sh@20 -- # IFS=: 00:07:09.299 15:49:20 -- accel/accel.sh@20 -- # read -r var val 00:07:09.299 15:49:20 -- accel/accel.sh@21 -- # val=Yes 00:07:09.299 15:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.299 15:49:20 -- accel/accel.sh@20 -- # IFS=: 00:07:09.299 15:49:20 -- accel/accel.sh@20 -- # read -r var val 00:07:09.299 15:49:20 -- accel/accel.sh@21 -- # val= 00:07:09.299 15:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.299 15:49:20 -- accel/accel.sh@20 -- # IFS=: 00:07:09.299 15:49:20 -- accel/accel.sh@20 -- # read -r var val 00:07:09.299 15:49:20 -- accel/accel.sh@21 -- # val= 00:07:09.299 15:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.299 15:49:20 -- accel/accel.sh@20 -- # IFS=: 00:07:09.299 15:49:20 -- accel/accel.sh@20 -- # read -r var val 00:07:10.724 15:49:22 -- accel/accel.sh@21 -- # val= 00:07:10.725 15:49:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.725 15:49:22 -- accel/accel.sh@20 -- # IFS=: 00:07:10.725 15:49:22 -- accel/accel.sh@20 -- # read -r var val 00:07:10.725 15:49:22 -- accel/accel.sh@21 -- # val= 00:07:10.725 15:49:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.725 15:49:22 -- accel/accel.sh@20 -- # IFS=: 00:07:10.725 15:49:22 -- accel/accel.sh@20 -- # read -r var val 00:07:10.725 15:49:22 -- accel/accel.sh@21 -- # val= 00:07:10.725 15:49:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.725 15:49:22 -- accel/accel.sh@20 -- # IFS=: 00:07:10.725 15:49:22 -- accel/accel.sh@20 -- # read -r var val 00:07:10.725 15:49:22 -- accel/accel.sh@21 -- # val= 00:07:10.725 15:49:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.725 15:49:22 -- accel/accel.sh@20 -- # IFS=: 00:07:10.725 15:49:22 -- accel/accel.sh@20 -- # read -r var val 00:07:10.725 15:49:22 -- accel/accel.sh@21 -- # val= 00:07:10.725 15:49:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.725 15:49:22 -- accel/accel.sh@20 -- # IFS=: 00:07:10.725 15:49:22 -- accel/accel.sh@20 -- # read -r var val 00:07:10.725 15:49:22 -- accel/accel.sh@21 -- # val= 00:07:10.725 15:49:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.725 15:49:22 -- accel/accel.sh@20 -- # IFS=: 00:07:10.725 15:49:22 -- accel/accel.sh@20 -- # read -r var val 00:07:10.725 15:49:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:10.725 15:49:22 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:10.725 15:49:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.725 00:07:10.725 real 0m3.824s 00:07:10.725 user 0m3.396s 00:07:10.725 sys 0m0.224s 00:07:10.725 15:49:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.725 15:49:22 -- common/autotest_common.sh@10 -- # set +x 00:07:10.725 ************************************ 00:07:10.725 END TEST accel_decomp 00:07:10.725 ************************************ 00:07:10.725 15:49:22 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:07:10.725 15:49:22 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:10.725 15:49:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.725 15:49:22 -- common/autotest_common.sh@10 -- # set +x 00:07:10.725 ************************************ 00:07:10.725 START TEST accel_decmop_full 00:07:10.725 ************************************ 00:07:10.725 15:49:22 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:10.725 15:49:22 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.725 15:49:22 -- accel/accel.sh@17 -- # local accel_module 00:07:10.725 15:49:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:10.725 15:49:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:10.725 15:49:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.725 15:49:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.725 15:49:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.725 15:49:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.725 15:49:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.725 15:49:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.725 15:49:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.725 15:49:22 -- accel/accel.sh@42 -- # jq -r . 00:07:10.725 [2024-11-29 15:49:22.100592] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:10.725 [2024-11-29 15:49:22.100693] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59420 ] 00:07:10.985 [2024-11-29 15:49:22.250025] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.246 [2024-11-29 15:49:22.424417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.161 15:49:24 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:13.161 00:07:13.161 SPDK Configuration: 00:07:13.161 Core mask: 0x1 00:07:13.161 00:07:13.161 Accel Perf Configuration: 00:07:13.161 Workload Type: decompress 00:07:13.161 Transfer size: 111250 bytes 00:07:13.161 Vector count 1 00:07:13.161 Module: software 00:07:13.161 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:13.161 Queue depth: 32 00:07:13.161 Allocate depth: 32 00:07:13.161 # threads/core: 1 00:07:13.161 Run time: 1 seconds 00:07:13.161 Verify: Yes 00:07:13.161 00:07:13.161 Running for 1 seconds... 
00:07:13.161 00:07:13.161 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:13.161 ------------------------------------------------------------------------------------ 00:07:13.161 0,0 4352/s 179 MiB/s 0 0 00:07:13.161 ==================================================================================== 00:07:13.161 Total 4352/s 461 MiB/s 0 0' 00:07:13.161 15:49:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.161 15:49:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.161 15:49:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:13.161 15:49:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:13.161 15:49:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.161 15:49:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.161 15:49:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.161 15:49:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.161 15:49:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.161 15:49:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.161 15:49:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.161 15:49:24 -- accel/accel.sh@42 -- # jq -r . 00:07:13.161 [2024-11-29 15:49:24.221893] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:13.161 [2024-11-29 15:49:24.222015] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59452 ] 00:07:13.161 [2024-11-29 15:49:24.372757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.161 [2024-11-29 15:49:24.549730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.423 15:49:24 -- accel/accel.sh@21 -- # val= 00:07:13.423 15:49:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.423 15:49:24 -- accel/accel.sh@21 -- # val= 00:07:13.423 15:49:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.423 15:49:24 -- accel/accel.sh@21 -- # val= 00:07:13.423 15:49:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.423 15:49:24 -- accel/accel.sh@21 -- # val=0x1 00:07:13.423 15:49:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.423 15:49:24 -- accel/accel.sh@21 -- # val= 00:07:13.423 15:49:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.423 15:49:24 -- accel/accel.sh@21 -- # val= 00:07:13.423 15:49:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.423 15:49:24 -- accel/accel.sh@21 -- # val=decompress 00:07:13.423 15:49:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.423 15:49:24 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:13.423 15:49:24 -- accel/accel.sh@20 
-- # IFS=: 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.423 15:49:24 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:13.423 15:49:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.423 15:49:24 -- accel/accel.sh@21 -- # val= 00:07:13.423 15:49:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.423 15:49:24 -- accel/accel.sh@21 -- # val=software 00:07:13.423 15:49:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.423 15:49:24 -- accel/accel.sh@23 -- # accel_module=software 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.423 15:49:24 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:13.423 15:49:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.423 15:49:24 -- accel/accel.sh@21 -- # val=32 00:07:13.423 15:49:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.423 15:49:24 -- accel/accel.sh@21 -- # val=32 00:07:13.423 15:49:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.423 15:49:24 -- accel/accel.sh@21 -- # val=1 00:07:13.423 15:49:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.423 15:49:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:13.423 15:49:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.423 15:49:24 -- accel/accel.sh@21 -- # val=Yes 00:07:13.423 15:49:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.423 15:49:24 -- accel/accel.sh@21 -- # val= 00:07:13.423 15:49:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.423 15:49:24 -- accel/accel.sh@21 -- # val= 00:07:13.423 15:49:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.423 15:49:24 -- accel/accel.sh@20 -- # read -r var val 00:07:15.347 15:49:26 -- accel/accel.sh@21 -- # val= 00:07:15.347 15:49:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.347 15:49:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.347 15:49:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.347 15:49:26 -- accel/accel.sh@21 -- # val= 00:07:15.347 15:49:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.347 15:49:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.347 15:49:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.347 15:49:26 -- accel/accel.sh@21 -- # val= 00:07:15.347 15:49:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.347 15:49:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.347 15:49:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.347 15:49:26 -- accel/accel.sh@21 -- # 
val= 00:07:15.347 15:49:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.347 15:49:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.347 15:49:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.347 15:49:26 -- accel/accel.sh@21 -- # val= 00:07:15.347 15:49:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.347 15:49:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.347 15:49:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.347 15:49:26 -- accel/accel.sh@21 -- # val= 00:07:15.347 15:49:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.347 15:49:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.347 15:49:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.347 15:49:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:15.347 15:49:26 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:15.347 15:49:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.347 00:07:15.347 real 0m4.251s 00:07:15.347 user 0m3.794s 00:07:15.347 sys 0m0.245s 00:07:15.347 ************************************ 00:07:15.347 END TEST accel_decmop_full 00:07:15.347 ************************************ 00:07:15.347 15:49:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.347 15:49:26 -- common/autotest_common.sh@10 -- # set +x 00:07:15.347 15:49:26 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:15.347 15:49:26 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:15.347 15:49:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.347 15:49:26 -- common/autotest_common.sh@10 -- # set +x 00:07:15.347 ************************************ 00:07:15.347 START TEST accel_decomp_mcore 00:07:15.347 ************************************ 00:07:15.347 15:49:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:15.347 15:49:26 -- accel/accel.sh@16 -- # local accel_opc 00:07:15.347 15:49:26 -- accel/accel.sh@17 -- # local accel_module 00:07:15.347 15:49:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:15.347 15:49:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:15.347 15:49:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.347 15:49:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.347 15:49:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.347 15:49:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.347 15:49:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.347 15:49:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.347 15:49:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.347 15:49:26 -- accel/accel.sh@42 -- # jq -r . 00:07:15.347 [2024-11-29 15:49:26.411838] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:15.348 [2024-11-29 15:49:26.411945] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59493 ] 00:07:15.348 [2024-11-29 15:49:26.562623] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:15.348 [2024-11-29 15:49:26.741101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.348 [2024-11-29 15:49:26.741154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.348 [2024-11-29 15:49:26.741566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.348 [2024-11-29 15:49:26.741572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.246 15:49:28 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:17.246 00:07:17.246 SPDK Configuration: 00:07:17.246 Core mask: 0xf 00:07:17.246 00:07:17.246 Accel Perf Configuration: 00:07:17.246 Workload Type: decompress 00:07:17.246 Transfer size: 4096 bytes 00:07:17.246 Vector count 1 00:07:17.246 Module: software 00:07:17.246 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:17.246 Queue depth: 32 00:07:17.246 Allocate depth: 32 00:07:17.246 # threads/core: 1 00:07:17.246 Run time: 1 seconds 00:07:17.246 Verify: Yes 00:07:17.246 00:07:17.246 Running for 1 seconds... 00:07:17.246 00:07:17.246 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:17.246 ------------------------------------------------------------------------------------ 00:07:17.246 0,0 58400/s 107 MiB/s 0 0 00:07:17.246 3,0 58880/s 108 MiB/s 0 0 00:07:17.246 2,0 58528/s 107 MiB/s 0 0 00:07:17.246 1,0 71616/s 132 MiB/s 0 0 00:07:17.246 ==================================================================================== 00:07:17.246 Total 247424/s 966 MiB/s 0 0' 00:07:17.246 15:49:28 -- accel/accel.sh@20 -- # IFS=: 00:07:17.246 15:49:28 -- accel/accel.sh@20 -- # read -r var val 00:07:17.246 15:49:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:17.246 15:49:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:17.246 15:49:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.246 15:49:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.246 15:49:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.246 15:49:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.246 15:49:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.246 15:49:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.246 15:49:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.246 15:49:28 -- accel/accel.sh@42 -- # jq -r . 00:07:17.246 [2024-11-29 15:49:28.470736] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:17.247 [2024-11-29 15:49:28.470839] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59522 ] 00:07:17.247 [2024-11-29 15:49:28.616248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:17.505 [2024-11-29 15:49:28.760269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.505 [2024-11-29 15:49:28.760556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.505 [2024-11-29 15:49:28.760734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.505 [2024-11-29 15:49:28.760753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.505 15:49:28 -- accel/accel.sh@21 -- # val= 00:07:17.505 15:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # IFS=: 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # read -r var val 00:07:17.505 15:49:28 -- accel/accel.sh@21 -- # val= 00:07:17.505 15:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # IFS=: 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # read -r var val 00:07:17.505 15:49:28 -- accel/accel.sh@21 -- # val= 00:07:17.505 15:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # IFS=: 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # read -r var val 00:07:17.505 15:49:28 -- accel/accel.sh@21 -- # val=0xf 00:07:17.505 15:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # IFS=: 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # read -r var val 00:07:17.505 15:49:28 -- accel/accel.sh@21 -- # val= 00:07:17.505 15:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # IFS=: 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # read -r var val 00:07:17.505 15:49:28 -- accel/accel.sh@21 -- # val= 00:07:17.505 15:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # IFS=: 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # read -r var val 00:07:17.505 15:49:28 -- accel/accel.sh@21 -- # val=decompress 00:07:17.505 15:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.505 15:49:28 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # IFS=: 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # read -r var val 00:07:17.505 15:49:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:17.505 15:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # IFS=: 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # read -r var val 00:07:17.505 15:49:28 -- accel/accel.sh@21 -- # val= 00:07:17.505 15:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # IFS=: 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # read -r var val 00:07:17.505 15:49:28 -- accel/accel.sh@21 -- # val=software 00:07:17.505 15:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.505 15:49:28 -- accel/accel.sh@23 -- # accel_module=software 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # IFS=: 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # read -r var val 00:07:17.505 15:49:28 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:17.505 15:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # IFS=: 
00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # read -r var val 00:07:17.505 15:49:28 -- accel/accel.sh@21 -- # val=32 00:07:17.505 15:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # IFS=: 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # read -r var val 00:07:17.505 15:49:28 -- accel/accel.sh@21 -- # val=32 00:07:17.505 15:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # IFS=: 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # read -r var val 00:07:17.505 15:49:28 -- accel/accel.sh@21 -- # val=1 00:07:17.505 15:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # IFS=: 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # read -r var val 00:07:17.505 15:49:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:17.505 15:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # IFS=: 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # read -r var val 00:07:17.505 15:49:28 -- accel/accel.sh@21 -- # val=Yes 00:07:17.505 15:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # IFS=: 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # read -r var val 00:07:17.505 15:49:28 -- accel/accel.sh@21 -- # val= 00:07:17.505 15:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # IFS=: 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # read -r var val 00:07:17.505 15:49:28 -- accel/accel.sh@21 -- # val= 00:07:17.505 15:49:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # IFS=: 00:07:17.505 15:49:28 -- accel/accel.sh@20 -- # read -r var val 00:07:19.406 15:49:30 -- accel/accel.sh@21 -- # val= 00:07:19.406 15:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.407 15:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:19.407 15:49:30 -- accel/accel.sh@20 -- # read -r var val 00:07:19.407 15:49:30 -- accel/accel.sh@21 -- # val= 00:07:19.407 15:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.407 15:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:19.407 15:49:30 -- accel/accel.sh@20 -- # read -r var val 00:07:19.407 15:49:30 -- accel/accel.sh@21 -- # val= 00:07:19.407 15:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.407 15:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:19.407 15:49:30 -- accel/accel.sh@20 -- # read -r var val 00:07:19.407 15:49:30 -- accel/accel.sh@21 -- # val= 00:07:19.407 15:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.407 15:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:19.407 15:49:30 -- accel/accel.sh@20 -- # read -r var val 00:07:19.407 15:49:30 -- accel/accel.sh@21 -- # val= 00:07:19.407 15:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.407 15:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:19.407 15:49:30 -- accel/accel.sh@20 -- # read -r var val 00:07:19.407 15:49:30 -- accel/accel.sh@21 -- # val= 00:07:19.407 15:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.407 15:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:19.407 15:49:30 -- accel/accel.sh@20 -- # read -r var val 00:07:19.407 15:49:30 -- accel/accel.sh@21 -- # val= 00:07:19.407 15:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.407 15:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:19.407 15:49:30 -- accel/accel.sh@20 -- # read -r var val 00:07:19.407 15:49:30 -- accel/accel.sh@21 -- # val= 00:07:19.407 15:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.407 15:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:19.407 15:49:30 -- 
accel/accel.sh@20 -- # read -r var val 00:07:19.407 15:49:30 -- accel/accel.sh@21 -- # val= 00:07:19.407 15:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.407 15:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:19.407 15:49:30 -- accel/accel.sh@20 -- # read -r var val 00:07:19.407 15:49:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:19.407 15:49:30 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:19.407 15:49:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.407 00:07:19.407 real 0m3.990s 00:07:19.407 user 0m6.248s 00:07:19.407 sys 0m0.133s 00:07:19.407 ************************************ 00:07:19.407 END TEST accel_decomp_mcore 00:07:19.407 ************************************ 00:07:19.407 15:49:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.407 15:49:30 -- common/autotest_common.sh@10 -- # set +x 00:07:19.407 15:49:30 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:19.407 15:49:30 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:19.407 15:49:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.407 15:49:30 -- common/autotest_common.sh@10 -- # set +x 00:07:19.407 ************************************ 00:07:19.407 START TEST accel_decomp_full_mcore 00:07:19.407 ************************************ 00:07:19.407 15:49:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:19.407 15:49:30 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.407 15:49:30 -- accel/accel.sh@17 -- # local accel_module 00:07:19.407 15:49:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:19.407 15:49:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:19.407 15:49:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.407 15:49:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.407 15:49:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.407 15:49:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.407 15:49:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.407 15:49:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.407 15:49:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.407 15:49:30 -- accel/accel.sh@42 -- # jq -r . 00:07:19.407 [2024-11-29 15:49:30.454345] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:19.407 [2024-11-29 15:49:30.454445] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59566 ] 00:07:19.407 [2024-11-29 15:49:30.596620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.407 [2024-11-29 15:49:30.771031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.407 [2024-11-29 15:49:30.771031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.407 [2024-11-29 15:49:30.771219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.407 [2024-11-29 15:49:30.771229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.306 15:49:32 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:21.306 00:07:21.306 SPDK Configuration: 00:07:21.306 Core mask: 0xf 00:07:21.306 00:07:21.306 Accel Perf Configuration: 00:07:21.306 Workload Type: decompress 00:07:21.306 Transfer size: 111250 bytes 00:07:21.306 Vector count 1 00:07:21.306 Module: software 00:07:21.306 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:21.306 Queue depth: 32 00:07:21.306 Allocate depth: 32 00:07:21.306 # threads/core: 1 00:07:21.306 Run time: 1 seconds 00:07:21.306 Verify: Yes 00:07:21.306 00:07:21.306 Running for 1 seconds... 00:07:21.306 00:07:21.306 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:21.306 ------------------------------------------------------------------------------------ 00:07:21.306 0,0 4352/s 179 MiB/s 0 0 00:07:21.306 3,0 4320/s 178 MiB/s 0 0 00:07:21.306 2,0 4288/s 177 MiB/s 0 0 00:07:21.306 1,0 4352/s 179 MiB/s 0 0 00:07:21.306 ==================================================================================== 00:07:21.306 Total 17312/s 1836 MiB/s 0 0' 00:07:21.306 15:49:32 -- accel/accel.sh@20 -- # IFS=: 00:07:21.306 15:49:32 -- accel/accel.sh@20 -- # read -r var val 00:07:21.306 15:49:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:21.306 15:49:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:21.306 15:49:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.306 15:49:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.306 15:49:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.306 15:49:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.306 15:49:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.306 15:49:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.306 15:49:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.306 15:49:32 -- accel/accel.sh@42 -- # jq -r . 00:07:21.306 [2024-11-29 15:49:32.619173] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:21.306 [2024-11-29 15:49:32.619381] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59595 ] 00:07:21.563 [2024-11-29 15:49:32.766919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.563 [2024-11-29 15:49:32.940441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.563 [2024-11-29 15:49:32.940625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.563 [2024-11-29 15:49:32.940879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.563 [2024-11-29 15:49:32.940887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.822 15:49:33 -- accel/accel.sh@21 -- # val= 00:07:21.822 15:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.822 15:49:33 -- accel/accel.sh@21 -- # val= 00:07:21.822 15:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.822 15:49:33 -- accel/accel.sh@21 -- # val= 00:07:21.822 15:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.822 15:49:33 -- accel/accel.sh@21 -- # val=0xf 00:07:21.822 15:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.822 15:49:33 -- accel/accel.sh@21 -- # val= 00:07:21.822 15:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.822 15:49:33 -- accel/accel.sh@21 -- # val= 00:07:21.822 15:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.822 15:49:33 -- accel/accel.sh@21 -- # val=decompress 00:07:21.822 15:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.822 15:49:33 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.822 15:49:33 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:21.822 15:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.822 15:49:33 -- accel/accel.sh@21 -- # val= 00:07:21.822 15:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.822 15:49:33 -- accel/accel.sh@21 -- # val=software 00:07:21.822 15:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.822 15:49:33 -- accel/accel.sh@23 -- # accel_module=software 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.822 15:49:33 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:21.822 15:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # IFS=: 
00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.822 15:49:33 -- accel/accel.sh@21 -- # val=32 00:07:21.822 15:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.822 15:49:33 -- accel/accel.sh@21 -- # val=32 00:07:21.822 15:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.822 15:49:33 -- accel/accel.sh@21 -- # val=1 00:07:21.822 15:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.822 15:49:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:21.822 15:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.822 15:49:33 -- accel/accel.sh@21 -- # val=Yes 00:07:21.822 15:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.822 15:49:33 -- accel/accel.sh@21 -- # val= 00:07:21.822 15:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.822 15:49:33 -- accel/accel.sh@21 -- # val= 00:07:21.822 15:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.822 15:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:23.260 15:49:34 -- accel/accel.sh@21 -- # val= 00:07:23.260 15:49:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.260 15:49:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.260 15:49:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.260 15:49:34 -- accel/accel.sh@21 -- # val= 00:07:23.260 15:49:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.260 15:49:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.260 15:49:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.260 15:49:34 -- accel/accel.sh@21 -- # val= 00:07:23.260 15:49:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.260 15:49:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.260 15:49:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.260 15:49:34 -- accel/accel.sh@21 -- # val= 00:07:23.260 15:49:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.260 15:49:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.260 15:49:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.260 15:49:34 -- accel/accel.sh@21 -- # val= 00:07:23.260 15:49:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.260 15:49:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.260 15:49:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.260 15:49:34 -- accel/accel.sh@21 -- # val= 00:07:23.260 15:49:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.260 15:49:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.260 15:49:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.260 15:49:34 -- accel/accel.sh@21 -- # val= 00:07:23.260 15:49:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.260 15:49:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.260 15:49:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.260 15:49:34 -- accel/accel.sh@21 -- # val= 00:07:23.260 15:49:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.260 15:49:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.260 15:49:34 -- 
accel/accel.sh@20 -- # read -r var val 00:07:23.260 15:49:34 -- accel/accel.sh@21 -- # val= 00:07:23.260 15:49:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.260 15:49:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.260 15:49:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.260 15:49:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:23.260 15:49:34 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:23.260 15:49:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.260 00:07:23.260 real 0m4.181s 00:07:23.260 user 0m12.617s 00:07:23.260 sys 0m0.276s 00:07:23.260 15:49:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:23.260 15:49:34 -- common/autotest_common.sh@10 -- # set +x 00:07:23.260 ************************************ 00:07:23.260 END TEST accel_decomp_full_mcore 00:07:23.260 ************************************ 00:07:23.260 15:49:34 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:23.260 15:49:34 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:23.260 15:49:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.260 15:49:34 -- common/autotest_common.sh@10 -- # set +x 00:07:23.260 ************************************ 00:07:23.260 START TEST accel_decomp_mthread 00:07:23.260 ************************************ 00:07:23.260 15:49:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:23.260 15:49:34 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.260 15:49:34 -- accel/accel.sh@17 -- # local accel_module 00:07:23.260 15:49:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:23.260 15:49:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:23.260 15:49:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.260 15:49:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.260 15:49:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.260 15:49:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.260 15:49:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.260 15:49:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.260 15:49:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.260 15:49:34 -- accel/accel.sh@42 -- # jq -r . 00:07:23.260 [2024-11-29 15:49:34.670129] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:23.260 [2024-11-29 15:49:34.670207] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59639 ] 00:07:23.533 [2024-11-29 15:49:34.810955] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.533 [2024-11-29 15:49:34.961171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.433 15:49:36 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:25.433 00:07:25.433 SPDK Configuration: 00:07:25.433 Core mask: 0x1 00:07:25.433 00:07:25.433 Accel Perf Configuration: 00:07:25.433 Workload Type: decompress 00:07:25.433 Transfer size: 4096 bytes 00:07:25.433 Vector count 1 00:07:25.433 Module: software 00:07:25.433 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:25.433 Queue depth: 32 00:07:25.433 Allocate depth: 32 00:07:25.433 # threads/core: 2 00:07:25.433 Run time: 1 seconds 00:07:25.433 Verify: Yes 00:07:25.433 00:07:25.433 Running for 1 seconds... 00:07:25.433 00:07:25.433 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:25.433 ------------------------------------------------------------------------------------ 00:07:25.433 0,1 40832/s 75 MiB/s 0 0 00:07:25.433 0,0 40704/s 75 MiB/s 0 0 00:07:25.433 ==================================================================================== 00:07:25.433 Total 81536/s 318 MiB/s 0 0' 00:07:25.433 15:49:36 -- accel/accel.sh@20 -- # IFS=: 00:07:25.433 15:49:36 -- accel/accel.sh@20 -- # read -r var val 00:07:25.433 15:49:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:25.433 15:49:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:25.433 15:49:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.433 15:49:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.433 15:49:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.433 15:49:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.433 15:49:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.433 15:49:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.433 15:49:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.433 15:49:36 -- accel/accel.sh@42 -- # jq -r . 00:07:25.433 [2024-11-29 15:49:36.601803] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:25.433 [2024-11-29 15:49:36.601886] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59665 ] 00:07:25.433 [2024-11-29 15:49:36.743534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.691 [2024-11-29 15:49:36.882533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.691 15:49:36 -- accel/accel.sh@21 -- # val= 00:07:25.691 15:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.691 15:49:36 -- accel/accel.sh@20 -- # IFS=: 00:07:25.691 15:49:36 -- accel/accel.sh@20 -- # read -r var val 00:07:25.691 15:49:36 -- accel/accel.sh@21 -- # val= 00:07:25.691 15:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.691 15:49:36 -- accel/accel.sh@20 -- # IFS=: 00:07:25.691 15:49:36 -- accel/accel.sh@20 -- # read -r var val 00:07:25.691 15:49:36 -- accel/accel.sh@21 -- # val= 00:07:25.691 15:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.691 15:49:36 -- accel/accel.sh@20 -- # IFS=: 00:07:25.691 15:49:36 -- accel/accel.sh@20 -- # read -r var val 00:07:25.691 15:49:36 -- accel/accel.sh@21 -- # val=0x1 00:07:25.691 15:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.691 15:49:36 -- accel/accel.sh@20 -- # IFS=: 00:07:25.691 15:49:36 -- accel/accel.sh@20 -- # read -r var val 00:07:25.691 15:49:36 -- accel/accel.sh@21 -- # val= 00:07:25.691 15:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.691 15:49:36 -- accel/accel.sh@20 -- # IFS=: 00:07:25.691 15:49:36 -- accel/accel.sh@20 -- # read -r var val 00:07:25.691 15:49:36 -- accel/accel.sh@21 -- # val= 00:07:25.691 15:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.691 15:49:36 -- accel/accel.sh@20 -- # IFS=: 00:07:25.691 15:49:36 -- accel/accel.sh@20 -- # read -r var val 00:07:25.691 15:49:36 -- accel/accel.sh@21 -- # val=decompress 00:07:25.691 15:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.691 15:49:36 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:25.691 15:49:36 -- accel/accel.sh@20 -- # IFS=: 00:07:25.691 15:49:36 -- accel/accel.sh@20 -- # read -r var val 00:07:25.691 15:49:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:25.691 15:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.691 15:49:37 -- accel/accel.sh@20 -- # IFS=: 00:07:25.691 15:49:37 -- accel/accel.sh@20 -- # read -r var val 00:07:25.691 15:49:37 -- accel/accel.sh@21 -- # val= 00:07:25.691 15:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.691 15:49:37 -- accel/accel.sh@20 -- # IFS=: 00:07:25.691 15:49:37 -- accel/accel.sh@20 -- # read -r var val 00:07:25.691 15:49:37 -- accel/accel.sh@21 -- # val=software 00:07:25.691 15:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.691 15:49:37 -- accel/accel.sh@23 -- # accel_module=software 00:07:25.691 15:49:37 -- accel/accel.sh@20 -- # IFS=: 00:07:25.691 15:49:37 -- accel/accel.sh@20 -- # read -r var val 00:07:25.691 15:49:37 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:25.691 15:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.691 15:49:37 -- accel/accel.sh@20 -- # IFS=: 00:07:25.691 15:49:37 -- accel/accel.sh@20 -- # read -r var val 00:07:25.691 15:49:37 -- accel/accel.sh@21 -- # val=32 00:07:25.691 15:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.691 15:49:37 -- accel/accel.sh@20 -- # IFS=: 00:07:25.691 15:49:37 -- accel/accel.sh@20 -- # read -r var val 00:07:25.691 15:49:37 -- 
accel/accel.sh@21 -- # val=32 00:07:25.691 15:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.691 15:49:37 -- accel/accel.sh@20 -- # IFS=: 00:07:25.691 15:49:37 -- accel/accel.sh@20 -- # read -r var val 00:07:25.691 15:49:37 -- accel/accel.sh@21 -- # val=2 00:07:25.691 15:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.691 15:49:37 -- accel/accel.sh@20 -- # IFS=: 00:07:25.691 15:49:37 -- accel/accel.sh@20 -- # read -r var val 00:07:25.691 15:49:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:25.691 15:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.691 15:49:37 -- accel/accel.sh@20 -- # IFS=: 00:07:25.691 15:49:37 -- accel/accel.sh@20 -- # read -r var val 00:07:25.691 15:49:37 -- accel/accel.sh@21 -- # val=Yes 00:07:25.691 15:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.691 15:49:37 -- accel/accel.sh@20 -- # IFS=: 00:07:25.691 15:49:37 -- accel/accel.sh@20 -- # read -r var val 00:07:25.691 15:49:37 -- accel/accel.sh@21 -- # val= 00:07:25.691 15:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.691 15:49:37 -- accel/accel.sh@20 -- # IFS=: 00:07:25.691 15:49:37 -- accel/accel.sh@20 -- # read -r var val 00:07:25.691 15:49:37 -- accel/accel.sh@21 -- # val= 00:07:25.691 15:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.691 15:49:37 -- accel/accel.sh@20 -- # IFS=: 00:07:25.691 15:49:37 -- accel/accel.sh@20 -- # read -r var val 00:07:27.069 15:49:38 -- accel/accel.sh@21 -- # val= 00:07:27.069 15:49:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.069 15:49:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.069 15:49:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.069 15:49:38 -- accel/accel.sh@21 -- # val= 00:07:27.069 15:49:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.069 15:49:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.070 15:49:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.070 15:49:38 -- accel/accel.sh@21 -- # val= 00:07:27.070 15:49:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.070 15:49:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.070 15:49:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.070 15:49:38 -- accel/accel.sh@21 -- # val= 00:07:27.070 15:49:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.070 15:49:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.070 15:49:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.070 15:49:38 -- accel/accel.sh@21 -- # val= 00:07:27.070 15:49:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.070 15:49:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.070 15:49:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.070 15:49:38 -- accel/accel.sh@21 -- # val= 00:07:27.070 15:49:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.070 15:49:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.070 15:49:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.070 15:49:38 -- accel/accel.sh@21 -- # val= 00:07:27.070 15:49:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.070 15:49:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.070 15:49:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.070 ************************************ 00:07:27.070 END TEST accel_decomp_mthread 00:07:27.070 15:49:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:27.070 15:49:38 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:27.070 15:49:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.070 00:07:27.070 real 0m3.844s 00:07:27.070 user 0m3.418s 00:07:27.070 sys 0m0.220s 00:07:27.070 15:49:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:27.070 15:49:38 -- 
common/autotest_common.sh@10 -- # set +x 00:07:27.070 ************************************ 00:07:27.329 15:49:38 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.329 15:49:38 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:27.329 15:49:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.329 15:49:38 -- common/autotest_common.sh@10 -- # set +x 00:07:27.329 ************************************ 00:07:27.329 START TEST accel_deomp_full_mthread 00:07:27.329 ************************************ 00:07:27.329 15:49:38 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.329 15:49:38 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.329 15:49:38 -- accel/accel.sh@17 -- # local accel_module 00:07:27.329 15:49:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.329 15:49:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.329 15:49:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.329 15:49:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.329 15:49:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.329 15:49:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.329 15:49:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.329 15:49:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.329 15:49:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.329 15:49:38 -- accel/accel.sh@42 -- # jq -r . 00:07:27.329 [2024-11-29 15:49:38.572129] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:27.329 [2024-11-29 15:49:38.572242] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59706 ] 00:07:27.329 [2024-11-29 15:49:38.723228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.588 [2024-11-29 15:49:38.875311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.490 15:49:40 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:29.490 00:07:29.490 SPDK Configuration: 00:07:29.490 Core mask: 0x1 00:07:29.490 00:07:29.490 Accel Perf Configuration: 00:07:29.490 Workload Type: decompress 00:07:29.490 Transfer size: 111250 bytes 00:07:29.490 Vector count 1 00:07:29.490 Module: software 00:07:29.490 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:29.490 Queue depth: 32 00:07:29.490 Allocate depth: 32 00:07:29.490 # threads/core: 2 00:07:29.490 Run time: 1 seconds 00:07:29.490 Verify: Yes 00:07:29.490 00:07:29.490 Running for 1 seconds... 
00:07:29.490 00:07:29.490 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:29.490 ------------------------------------------------------------------------------------ 00:07:29.490 0,1 2816/s 116 MiB/s 0 0 00:07:29.490 0,0 2752/s 113 MiB/s 0 0 00:07:29.490 ==================================================================================== 00:07:29.490 Total 5568/s 590 MiB/s 0 0' 00:07:29.490 15:49:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.490 15:49:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.490 15:49:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:29.490 15:49:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:29.490 15:49:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.490 15:49:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.490 15:49:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.490 15:49:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.490 15:49:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.490 15:49:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.490 15:49:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.490 15:49:40 -- accel/accel.sh@42 -- # jq -r . 00:07:29.490 [2024-11-29 15:49:40.535935] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:29.490 [2024-11-29 15:49:40.536072] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59735 ] 00:07:29.490 [2024-11-29 15:49:40.673015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.490 [2024-11-29 15:49:40.812046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.749 15:49:40 -- accel/accel.sh@21 -- # val= 00:07:29.749 15:49:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.749 15:49:40 -- accel/accel.sh@21 -- # val= 00:07:29.749 15:49:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.749 15:49:40 -- accel/accel.sh@21 -- # val= 00:07:29.749 15:49:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.749 15:49:40 -- accel/accel.sh@21 -- # val=0x1 00:07:29.749 15:49:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.749 15:49:40 -- accel/accel.sh@21 -- # val= 00:07:29.749 15:49:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.749 15:49:40 -- accel/accel.sh@21 -- # val= 00:07:29.749 15:49:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.749 15:49:40 -- accel/accel.sh@21 -- # val=decompress 00:07:29.749 15:49:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.749 15:49:40 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.749 15:49:40 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:29.749 15:49:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.749 15:49:40 -- accel/accel.sh@21 -- # val= 00:07:29.749 15:49:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.749 15:49:40 -- accel/accel.sh@21 -- # val=software 00:07:29.749 15:49:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.749 15:49:40 -- accel/accel.sh@23 -- # accel_module=software 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.749 15:49:40 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:29.749 15:49:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.749 15:49:40 -- accel/accel.sh@21 -- # val=32 00:07:29.749 15:49:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.749 15:49:40 -- accel/accel.sh@21 -- # val=32 00:07:29.749 15:49:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.749 15:49:40 -- accel/accel.sh@21 -- # val=2 00:07:29.749 15:49:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.749 15:49:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:29.749 15:49:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.749 15:49:40 -- accel/accel.sh@21 -- # val=Yes 00:07:29.749 15:49:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.749 15:49:40 -- accel/accel.sh@21 -- # val= 00:07:29.749 15:49:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.749 15:49:40 -- accel/accel.sh@21 -- # val= 00:07:29.749 15:49:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.749 15:49:40 -- accel/accel.sh@20 -- # read -r var val 00:07:31.125 15:49:42 -- accel/accel.sh@21 -- # val= 00:07:31.125 15:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.125 15:49:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.125 15:49:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.125 15:49:42 -- accel/accel.sh@21 -- # val= 00:07:31.125 15:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.125 15:49:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.125 15:49:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.125 15:49:42 -- accel/accel.sh@21 -- # val= 00:07:31.125 15:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.125 15:49:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.125 15:49:42 -- accel/accel.sh@20 -- # 
read -r var val 00:07:31.125 15:49:42 -- accel/accel.sh@21 -- # val= 00:07:31.125 15:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.125 15:49:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.125 15:49:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.125 15:49:42 -- accel/accel.sh@21 -- # val= 00:07:31.125 15:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.125 15:49:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.125 15:49:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.125 15:49:42 -- accel/accel.sh@21 -- # val= 00:07:31.125 15:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.125 15:49:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.125 15:49:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.125 15:49:42 -- accel/accel.sh@21 -- # val= 00:07:31.125 15:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.125 15:49:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.125 15:49:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.125 15:49:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:31.125 ************************************ 00:07:31.125 END TEST accel_deomp_full_mthread 00:07:31.125 ************************************ 00:07:31.125 15:49:42 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:31.125 15:49:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.125 00:07:31.125 real 0m3.890s 00:07:31.125 user 0m3.469s 00:07:31.125 sys 0m0.218s 00:07:31.125 15:49:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:31.125 15:49:42 -- common/autotest_common.sh@10 -- # set +x 00:07:31.125 15:49:42 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:31.125 15:49:42 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:31.125 15:49:42 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:31.125 15:49:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.125 15:49:42 -- common/autotest_common.sh@10 -- # set +x 00:07:31.125 15:49:42 -- accel/accel.sh@129 -- # build_accel_config 00:07:31.125 15:49:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.125 15:49:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.125 15:49:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.125 15:49:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.125 15:49:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.125 15:49:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.125 15:49:42 -- accel/accel.sh@42 -- # jq -r . 00:07:31.125 ************************************ 00:07:31.125 START TEST accel_dif_functional_tests 00:07:31.125 ************************************ 00:07:31.125 15:49:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:31.125 [2024-11-29 15:49:42.517939] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:31.125 [2024-11-29 15:49:42.518051] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59777 ] 00:07:31.383 [2024-11-29 15:49:42.650282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:31.383 [2024-11-29 15:49:42.790149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.383 [2024-11-29 15:49:42.790347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.383 [2024-11-29 15:49:42.790373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.641 00:07:31.641 00:07:31.641 CUnit - A unit testing framework for C - Version 2.1-3 00:07:31.641 http://cunit.sourceforge.net/ 00:07:31.641 00:07:31.641 00:07:31.641 Suite: accel_dif 00:07:31.641 Test: verify: DIF generated, GUARD check ...passed 00:07:31.641 Test: verify: DIF generated, APPTAG check ...passed 00:07:31.641 Test: verify: DIF generated, REFTAG check ...passed 00:07:31.641 Test: verify: DIF not generated, GUARD check ...[2024-11-29 15:49:43.034751] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:31.641 [2024-11-29 15:49:43.035384] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:31.641 passed 00:07:31.641 Test: verify: DIF not generated, APPTAG check ...[2024-11-29 15:49:43.035639] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:31.641 [2024-11-29 15:49:43.036007] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:31.641 passed 00:07:31.641 Test: verify: DIF not generated, REFTAG check ...[2024-11-29 15:49:43.036186] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:31.641 [2024-11-29 15:49:43.036599] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:31.641 passed 00:07:31.641 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:31.641 Test: verify: APPTAG incorrect, APPTAG check ...[2024-11-29 15:49:43.036943] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:31.641 passed 00:07:31.641 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:31.641 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:31.641 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:31.641 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:31.641 Test: generate copy: DIF generated, GUARD check ...[2024-11-29 15:49:43.037755] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:31.641 passed 00:07:31.641 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:31.641 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:31.641 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:31.641 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:31.641 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:31.641 Test: generate copy: iovecs-len validate ...[2024-11-29 15:49:43.038417] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:31.641 passed 00:07:31.641 Test: generate copy: buffer alignment validate ...passed 00:07:31.641 00:07:31.641 Run Summary: Type Total Ran Passed Failed Inactive 00:07:31.641 suites 1 1 n/a 0 0 00:07:31.641 tests 20 20 20 0 0 00:07:31.641 asserts 204 204 204 0 n/a 00:07:31.641 00:07:31.641 Elapsed time = 0.011 seconds 00:07:32.573 00:07:32.573 real 0m1.180s 00:07:32.573 user 0m2.211s 00:07:32.573 sys 0m0.148s 00:07:32.573 15:49:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:32.573 15:49:43 -- common/autotest_common.sh@10 -- # set +x 00:07:32.573 ************************************ 00:07:32.573 END TEST accel_dif_functional_tests 00:07:32.573 ************************************ 00:07:32.573 00:07:32.573 real 1m24.932s 00:07:32.573 user 1m33.416s 00:07:32.573 sys 0m6.187s 00:07:32.573 15:49:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:32.573 15:49:43 -- common/autotest_common.sh@10 -- # set +x 00:07:32.573 ************************************ 00:07:32.573 END TEST accel 00:07:32.573 ************************************ 00:07:32.573 15:49:43 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:32.573 15:49:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:32.573 15:49:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.573 15:49:43 -- common/autotest_common.sh@10 -- # set +x 00:07:32.573 ************************************ 00:07:32.573 START TEST accel_rpc 00:07:32.573 ************************************ 00:07:32.573 15:49:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:32.573 * Looking for test storage... 00:07:32.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:32.573 15:49:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:32.573 15:49:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:32.573 15:49:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:32.573 15:49:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:32.573 15:49:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:32.573 15:49:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:32.573 15:49:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:32.573 15:49:43 -- scripts/common.sh@335 -- # IFS=.-: 00:07:32.573 15:49:43 -- scripts/common.sh@335 -- # read -ra ver1 00:07:32.573 15:49:43 -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.573 15:49:43 -- scripts/common.sh@336 -- # read -ra ver2 00:07:32.573 15:49:43 -- scripts/common.sh@337 -- # local 'op=<' 00:07:32.573 15:49:43 -- scripts/common.sh@339 -- # ver1_l=2 00:07:32.573 15:49:43 -- scripts/common.sh@340 -- # ver2_l=1 00:07:32.573 15:49:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:32.573 15:49:43 -- scripts/common.sh@343 -- # case "$op" in 00:07:32.573 15:49:43 -- scripts/common.sh@344 -- # : 1 00:07:32.573 15:49:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:32.573 15:49:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:32.573 15:49:43 -- scripts/common.sh@364 -- # decimal 1 00:07:32.573 15:49:43 -- scripts/common.sh@352 -- # local d=1 00:07:32.573 15:49:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.573 15:49:43 -- scripts/common.sh@354 -- # echo 1 00:07:32.573 15:49:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:32.573 15:49:43 -- scripts/common.sh@365 -- # decimal 2 00:07:32.573 15:49:43 -- scripts/common.sh@352 -- # local d=2 00:07:32.573 15:49:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.573 15:49:43 -- scripts/common.sh@354 -- # echo 2 00:07:32.573 15:49:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:32.573 15:49:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:32.573 15:49:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:32.573 15:49:43 -- scripts/common.sh@367 -- # return 0 00:07:32.573 15:49:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.573 15:49:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:32.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.573 --rc genhtml_branch_coverage=1 00:07:32.573 --rc genhtml_function_coverage=1 00:07:32.573 --rc genhtml_legend=1 00:07:32.573 --rc geninfo_all_blocks=1 00:07:32.573 --rc geninfo_unexecuted_blocks=1 00:07:32.573 00:07:32.573 ' 00:07:32.573 15:49:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:32.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.573 --rc genhtml_branch_coverage=1 00:07:32.573 --rc genhtml_function_coverage=1 00:07:32.573 --rc genhtml_legend=1 00:07:32.573 --rc geninfo_all_blocks=1 00:07:32.573 --rc geninfo_unexecuted_blocks=1 00:07:32.573 00:07:32.573 ' 00:07:32.573 15:49:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:32.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.573 --rc genhtml_branch_coverage=1 00:07:32.573 --rc genhtml_function_coverage=1 00:07:32.573 --rc genhtml_legend=1 00:07:32.573 --rc geninfo_all_blocks=1 00:07:32.573 --rc geninfo_unexecuted_blocks=1 00:07:32.573 00:07:32.573 ' 00:07:32.573 15:49:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:32.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.573 --rc genhtml_branch_coverage=1 00:07:32.573 --rc genhtml_function_coverage=1 00:07:32.573 --rc genhtml_legend=1 00:07:32.573 --rc geninfo_all_blocks=1 00:07:32.573 --rc geninfo_unexecuted_blocks=1 00:07:32.573 00:07:32.573 ' 00:07:32.573 15:49:43 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:32.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.573 15:49:43 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=59855 00:07:32.573 15:49:43 -- accel/accel_rpc.sh@15 -- # waitforlisten 59855 00:07:32.573 15:49:43 -- common/autotest_common.sh@829 -- # '[' -z 59855 ']' 00:07:32.573 15:49:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.573 15:49:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:32.573 15:49:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
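For reference, the dotted-version comparison traced above (scripts/common.sh's lt and cmp_versions, used to decide which lcov coverage flags to export) boils down to a split-on-dots, compare-component-wise loop. A minimal sketch, with ver_lt as an illustrative stand-in for the shipped helper rather than its exact code:

  # Succeeds when dotted version $1 sorts strictly below $2; missing
  # components are padded with 0, mirroring the traced loop above.
  ver_lt() {
      local -a a b
      IFS=.- read -ra a <<< "$1"
      IFS=.- read -ra b <<< "$2"
      local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < len; i++ )); do
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      done
      return 1   # equal versions are not strictly less
  }
  ver_lt 1.15 2 && echo '1.15 < 2'   # matches the trace: lt 1.15 2 succeeds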
00:07:32.573 15:49:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:32.573 15:49:43 -- common/autotest_common.sh@10 -- # set +x 00:07:32.573 15:49:43 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:32.573 [2024-11-29 15:49:43.957919] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:32.573 [2024-11-29 15:49:43.958046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59855 ] 00:07:32.832 [2024-11-29 15:49:44.104220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.832 [2024-11-29 15:49:44.242289] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:32.832 [2024-11-29 15:49:44.242441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.397 15:49:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:33.397 15:49:44 -- common/autotest_common.sh@862 -- # return 0 00:07:33.397 15:49:44 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:33.397 15:49:44 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:33.397 15:49:44 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:33.397 15:49:44 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:33.397 15:49:44 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:33.397 15:49:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:33.397 15:49:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.397 15:49:44 -- common/autotest_common.sh@10 -- # set +x 00:07:33.397 ************************************ 00:07:33.397 START TEST accel_assign_opcode 00:07:33.397 ************************************ 00:07:33.397 15:49:44 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:33.397 15:49:44 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:33.397 15:49:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.397 15:49:44 -- common/autotest_common.sh@10 -- # set +x 00:07:33.397 [2024-11-29 15:49:44.774955] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:33.397 15:49:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.397 15:49:44 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:33.397 15:49:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.397 15:49:44 -- common/autotest_common.sh@10 -- # set +x 00:07:33.397 [2024-11-29 15:49:44.782922] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:33.397 15:49:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.397 15:49:44 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:33.397 15:49:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.397 15:49:44 -- common/autotest_common.sh@10 -- # set +x 00:07:33.962 15:49:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.962 15:49:45 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:33.962 15:49:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.962 15:49:45 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:33.962 15:49:45 -- common/autotest_common.sh@10 -- # set +x 00:07:33.962 15:49:45 -- accel/accel_rpc.sh@42 -- # grep software 
00:07:33.962 15:49:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.962 software 00:07:33.962 00:07:33.962 real 0m0.471s 00:07:33.962 user 0m0.033s 00:07:33.962 sys 0m0.013s 00:07:33.962 15:49:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:33.962 ************************************ 00:07:33.962 END TEST accel_assign_opcode 00:07:33.962 ************************************ 00:07:33.962 15:49:45 -- common/autotest_common.sh@10 -- # set +x 00:07:33.962 15:49:45 -- accel/accel_rpc.sh@55 -- # killprocess 59855 00:07:33.962 15:49:45 -- common/autotest_common.sh@936 -- # '[' -z 59855 ']' 00:07:33.962 15:49:45 -- common/autotest_common.sh@940 -- # kill -0 59855 00:07:33.962 15:49:45 -- common/autotest_common.sh@941 -- # uname 00:07:33.962 15:49:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:33.962 15:49:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59855 00:07:33.962 15:49:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:33.962 15:49:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:33.962 killing process with pid 59855 00:07:33.962 15:49:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59855' 00:07:33.962 15:49:45 -- common/autotest_common.sh@955 -- # kill 59855 00:07:33.962 15:49:45 -- common/autotest_common.sh@960 -- # wait 59855 00:07:35.339 00:07:35.339 real 0m2.724s 00:07:35.339 user 0m2.704s 00:07:35.339 sys 0m0.382s 00:07:35.339 15:49:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.339 ************************************ 00:07:35.339 END TEST accel_rpc 00:07:35.339 ************************************ 00:07:35.339 15:49:46 -- common/autotest_common.sh@10 -- # set +x 00:07:35.339 15:49:46 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:35.339 15:49:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:35.339 15:49:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.339 15:49:46 -- common/autotest_common.sh@10 -- # set +x 00:07:35.339 ************************************ 00:07:35.339 START TEST app_cmdline 00:07:35.339 ************************************ 00:07:35.339 15:49:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:35.339 * Looking for test storage... 
00:07:35.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:35.339 15:49:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:35.339 15:49:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:35.339 15:49:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:35.339 15:49:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:35.339 15:49:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:35.339 15:49:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:35.339 15:49:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:35.339 15:49:46 -- scripts/common.sh@335 -- # IFS=.-: 00:07:35.339 15:49:46 -- scripts/common.sh@335 -- # read -ra ver1 00:07:35.339 15:49:46 -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.339 15:49:46 -- scripts/common.sh@336 -- # read -ra ver2 00:07:35.339 15:49:46 -- scripts/common.sh@337 -- # local 'op=<' 00:07:35.339 15:49:46 -- scripts/common.sh@339 -- # ver1_l=2 00:07:35.339 15:49:46 -- scripts/common.sh@340 -- # ver2_l=1 00:07:35.339 15:49:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:35.339 15:49:46 -- scripts/common.sh@343 -- # case "$op" in 00:07:35.339 15:49:46 -- scripts/common.sh@344 -- # : 1 00:07:35.339 15:49:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:35.339 15:49:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:35.339 15:49:46 -- scripts/common.sh@364 -- # decimal 1 00:07:35.339 15:49:46 -- scripts/common.sh@352 -- # local d=1 00:07:35.339 15:49:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.339 15:49:46 -- scripts/common.sh@354 -- # echo 1 00:07:35.339 15:49:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:35.339 15:49:46 -- scripts/common.sh@365 -- # decimal 2 00:07:35.339 15:49:46 -- scripts/common.sh@352 -- # local d=2 00:07:35.339 15:49:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.339 15:49:46 -- scripts/common.sh@354 -- # echo 2 00:07:35.339 15:49:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:35.339 15:49:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:35.339 15:49:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:35.339 15:49:46 -- scripts/common.sh@367 -- # return 0 00:07:35.339 15:49:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.339 15:49:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:35.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.339 --rc genhtml_branch_coverage=1 00:07:35.339 --rc genhtml_function_coverage=1 00:07:35.339 --rc genhtml_legend=1 00:07:35.339 --rc geninfo_all_blocks=1 00:07:35.339 --rc geninfo_unexecuted_blocks=1 00:07:35.339 00:07:35.339 ' 00:07:35.339 15:49:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:35.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.339 --rc genhtml_branch_coverage=1 00:07:35.339 --rc genhtml_function_coverage=1 00:07:35.339 --rc genhtml_legend=1 00:07:35.339 --rc geninfo_all_blocks=1 00:07:35.339 --rc geninfo_unexecuted_blocks=1 00:07:35.339 00:07:35.339 ' 00:07:35.339 15:49:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:35.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.339 --rc genhtml_branch_coverage=1 00:07:35.339 --rc genhtml_function_coverage=1 00:07:35.339 --rc genhtml_legend=1 00:07:35.339 --rc geninfo_all_blocks=1 00:07:35.339 --rc geninfo_unexecuted_blocks=1 00:07:35.339 00:07:35.339 ' 00:07:35.339 15:49:46 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:35.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.339 --rc genhtml_branch_coverage=1 00:07:35.339 --rc genhtml_function_coverage=1 00:07:35.339 --rc genhtml_legend=1 00:07:35.339 --rc geninfo_all_blocks=1 00:07:35.339 --rc geninfo_unexecuted_blocks=1 00:07:35.339 00:07:35.339 ' 00:07:35.339 15:49:46 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:35.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.339 15:49:46 -- app/cmdline.sh@17 -- # spdk_tgt_pid=59967 00:07:35.339 15:49:46 -- app/cmdline.sh@18 -- # waitforlisten 59967 00:07:35.339 15:49:46 -- common/autotest_common.sh@829 -- # '[' -z 59967 ']' 00:07:35.339 15:49:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.339 15:49:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:35.339 15:49:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.339 15:49:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:35.339 15:49:46 -- common/autotest_common.sh@10 -- # set +x 00:07:35.339 15:49:46 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:35.339 [2024-11-29 15:49:46.721563] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:35.340 [2024-11-29 15:49:46.722052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59967 ] 00:07:35.599 [2024-11-29 15:49:46.870167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.599 [2024-11-29 15:49:47.006239] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:35.599 [2024-11-29 15:49:47.006392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.166 15:49:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:36.166 15:49:47 -- common/autotest_common.sh@862 -- # return 0 00:07:36.166 15:49:47 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:36.424 { 00:07:36.424 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:36.424 "fields": { 00:07:36.424 "major": 24, 00:07:36.424 "minor": 1, 00:07:36.424 "patch": 1, 00:07:36.424 "suffix": "-pre", 00:07:36.424 "commit": "c13c99a5e" 00:07:36.424 } 00:07:36.424 } 00:07:36.424 15:49:47 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:36.424 15:49:47 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:36.424 15:49:47 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:36.424 15:49:47 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:36.424 15:49:47 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:36.424 15:49:47 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:36.424 15:49:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.424 15:49:47 -- app/cmdline.sh@26 -- # sort 00:07:36.424 15:49:47 -- common/autotest_common.sh@10 -- # set +x 00:07:36.424 15:49:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.424 15:49:47 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:36.425 15:49:47 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:36.425 15:49:47 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:36.425 15:49:47 -- common/autotest_common.sh@650 -- # local es=0 00:07:36.425 15:49:47 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:36.425 15:49:47 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:36.425 15:49:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.425 15:49:47 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:36.425 15:49:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.425 15:49:47 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:36.425 15:49:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.425 15:49:47 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:36.425 15:49:47 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:36.425 15:49:47 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:36.683 request: 00:07:36.683 { 00:07:36.683 "method": "env_dpdk_get_mem_stats", 00:07:36.683 "req_id": 1 00:07:36.683 } 00:07:36.683 Got JSON-RPC error response 00:07:36.683 response: 00:07:36.683 { 00:07:36.683 "code": -32601, 00:07:36.683 "message": "Method not found" 00:07:36.683 } 00:07:36.683 15:49:47 -- common/autotest_common.sh@653 -- # es=1 00:07:36.683 15:49:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:36.683 15:49:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:36.683 15:49:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:36.683 15:49:47 -- app/cmdline.sh@1 -- # killprocess 59967 00:07:36.683 15:49:47 -- common/autotest_common.sh@936 -- # '[' -z 59967 ']' 00:07:36.683 15:49:47 -- common/autotest_common.sh@940 -- # kill -0 59967 00:07:36.683 15:49:47 -- common/autotest_common.sh@941 -- # uname 00:07:36.683 15:49:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:36.683 15:49:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59967 00:07:36.683 15:49:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:36.683 15:49:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:36.683 killing process with pid 59967 00:07:36.683 15:49:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59967' 00:07:36.683 15:49:47 -- common/autotest_common.sh@955 -- # kill 59967 00:07:36.683 15:49:47 -- common/autotest_common.sh@960 -- # wait 59967 00:07:38.062 00:07:38.062 real 0m2.600s 00:07:38.062 user 0m2.878s 00:07:38.062 sys 0m0.390s 00:07:38.062 15:49:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.062 15:49:49 -- common/autotest_common.sh@10 -- # set +x 00:07:38.062 ************************************ 00:07:38.062 END TEST app_cmdline 00:07:38.062 ************************************ 00:07:38.062 15:49:49 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:38.062 15:49:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:38.062 15:49:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.062 15:49:49 -- common/autotest_common.sh@10 -- # set +x 00:07:38.062 
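The cmdline test above boots spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods and then verifies that any other method is rejected with JSON-RPC error -32601. The same surface can be poked by hand over the default /var/tmp/spdk.sock socket; a hedged sketch using only the calls already traced in this log:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC spdk_get_version                       # allowed: prints the version object shown above
  $RPC rpc_get_methods | jq -r '.[]' | sort   # allowed: exactly the two-method allowlist
  $RPC env_dpdk_get_mem_stats                 # blocked: expect code -32601, "Method not found"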
************************************ 00:07:38.062 START TEST version 00:07:38.062 ************************************ 00:07:38.062 15:49:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:38.062 * Looking for test storage... 00:07:38.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:38.062 15:49:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:38.062 15:49:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:38.062 15:49:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:38.062 15:49:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:38.062 15:49:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:38.062 15:49:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:38.062 15:49:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:38.062 15:49:49 -- scripts/common.sh@335 -- # IFS=.-: 00:07:38.062 15:49:49 -- scripts/common.sh@335 -- # read -ra ver1 00:07:38.062 15:49:49 -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.062 15:49:49 -- scripts/common.sh@336 -- # read -ra ver2 00:07:38.062 15:49:49 -- scripts/common.sh@337 -- # local 'op=<' 00:07:38.062 15:49:49 -- scripts/common.sh@339 -- # ver1_l=2 00:07:38.062 15:49:49 -- scripts/common.sh@340 -- # ver2_l=1 00:07:38.062 15:49:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:38.062 15:49:49 -- scripts/common.sh@343 -- # case "$op" in 00:07:38.062 15:49:49 -- scripts/common.sh@344 -- # : 1 00:07:38.062 15:49:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:38.062 15:49:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:38.062 15:49:49 -- scripts/common.sh@364 -- # decimal 1 00:07:38.062 15:49:49 -- scripts/common.sh@352 -- # local d=1 00:07:38.062 15:49:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.062 15:49:49 -- scripts/common.sh@354 -- # echo 1 00:07:38.062 15:49:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:38.062 15:49:49 -- scripts/common.sh@365 -- # decimal 2 00:07:38.062 15:49:49 -- scripts/common.sh@352 -- # local d=2 00:07:38.062 15:49:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.062 15:49:49 -- scripts/common.sh@354 -- # echo 2 00:07:38.062 15:49:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:38.062 15:49:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:38.062 15:49:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:38.062 15:49:49 -- scripts/common.sh@367 -- # return 0 00:07:38.062 15:49:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.062 15:49:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:38.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.062 --rc genhtml_branch_coverage=1 00:07:38.062 --rc genhtml_function_coverage=1 00:07:38.062 --rc genhtml_legend=1 00:07:38.062 --rc geninfo_all_blocks=1 00:07:38.062 --rc geninfo_unexecuted_blocks=1 00:07:38.062 00:07:38.062 ' 00:07:38.062 15:49:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:38.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.062 --rc genhtml_branch_coverage=1 00:07:38.062 --rc genhtml_function_coverage=1 00:07:38.062 --rc genhtml_legend=1 00:07:38.062 --rc geninfo_all_blocks=1 00:07:38.062 --rc geninfo_unexecuted_blocks=1 00:07:38.062 00:07:38.062 ' 00:07:38.062 15:49:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:38.062 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:38.062 --rc genhtml_branch_coverage=1 00:07:38.062 --rc genhtml_function_coverage=1 00:07:38.062 --rc genhtml_legend=1 00:07:38.062 --rc geninfo_all_blocks=1 00:07:38.062 --rc geninfo_unexecuted_blocks=1 00:07:38.062 00:07:38.062 ' 00:07:38.062 15:49:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:38.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.062 --rc genhtml_branch_coverage=1 00:07:38.062 --rc genhtml_function_coverage=1 00:07:38.062 --rc genhtml_legend=1 00:07:38.062 --rc geninfo_all_blocks=1 00:07:38.062 --rc geninfo_unexecuted_blocks=1 00:07:38.062 00:07:38.062 ' 00:07:38.062 15:49:49 -- app/version.sh@17 -- # get_header_version major 00:07:38.062 15:49:49 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:38.062 15:49:49 -- app/version.sh@14 -- # cut -f2 00:07:38.062 15:49:49 -- app/version.sh@14 -- # tr -d '"' 00:07:38.062 15:49:49 -- app/version.sh@17 -- # major=24 00:07:38.062 15:49:49 -- app/version.sh@18 -- # get_header_version minor 00:07:38.062 15:49:49 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:38.062 15:49:49 -- app/version.sh@14 -- # cut -f2 00:07:38.062 15:49:49 -- app/version.sh@14 -- # tr -d '"' 00:07:38.062 15:49:49 -- app/version.sh@18 -- # minor=1 00:07:38.062 15:49:49 -- app/version.sh@19 -- # get_header_version patch 00:07:38.062 15:49:49 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:38.062 15:49:49 -- app/version.sh@14 -- # cut -f2 00:07:38.062 15:49:49 -- app/version.sh@14 -- # tr -d '"' 00:07:38.062 15:49:49 -- app/version.sh@19 -- # patch=1 00:07:38.062 15:49:49 -- app/version.sh@20 -- # get_header_version suffix 00:07:38.062 15:49:49 -- app/version.sh@14 -- # cut -f2 00:07:38.062 15:49:49 -- app/version.sh@14 -- # tr -d '"' 00:07:38.062 15:49:49 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:38.062 15:49:49 -- app/version.sh@20 -- # suffix=-pre 00:07:38.062 15:49:49 -- app/version.sh@22 -- # version=24.1 00:07:38.062 15:49:49 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:38.062 15:49:49 -- app/version.sh@25 -- # version=24.1.1 00:07:38.062 15:49:49 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:38.062 15:49:49 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:38.062 15:49:49 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:38.062 15:49:49 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:38.062 15:49:49 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:38.062 00:07:38.062 real 0m0.194s 00:07:38.062 user 0m0.128s 00:07:38.062 sys 0m0.096s 00:07:38.062 15:49:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.062 ************************************ 00:07:38.062 END TEST version 00:07:38.062 ************************************ 00:07:38.062 15:49:49 -- common/autotest_common.sh@10 -- # set +x 00:07:38.062 15:49:49 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:38.062 15:49:49 -- spdk/autotest.sh@191 -- # uname -s 00:07:38.062 15:49:49 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 
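version.sh above assembles 24.1.1rc0 by grepping SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h and cross-checking the result against python3 -c 'import spdk; print(spdk.__version__)'. A minimal re-implementation of the traced helper (the cut -f2 assumes the tab-separated #define layout of version.h, as the test itself does):

  get_header_version() {   # usage: get_header_version MAJOR|MINOR|PATCH|SUFFIX
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
          /home/vagrant/spdk_repo/spdk/include/spdk/version.h |
          cut -f2 | tr -d '"'
  }
  version="$(get_header_version MAJOR).$(get_header_version MINOR)"   # 24.1 in this run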
00:07:38.062 15:49:49 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:38.062 15:49:49 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:38.062 15:49:49 -- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']' 00:07:38.062 15:49:49 -- spdk/autotest.sh@205 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:38.062 15:49:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:38.062 15:49:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.062 15:49:49 -- common/autotest_common.sh@10 -- # set +x 00:07:38.063 ************************************ 00:07:38.063 START TEST blockdev_nvme 00:07:38.063 ************************************ 00:07:38.063 15:49:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:38.063 * Looking for test storage... 00:07:38.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:38.063 15:49:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:38.063 15:49:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:38.063 15:49:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:38.323 15:49:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:38.323 15:49:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:38.323 15:49:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:38.323 15:49:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:38.323 15:49:49 -- scripts/common.sh@335 -- # IFS=.-: 00:07:38.323 15:49:49 -- scripts/common.sh@335 -- # read -ra ver1 00:07:38.323 15:49:49 -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.323 15:49:49 -- scripts/common.sh@336 -- # read -ra ver2 00:07:38.323 15:49:49 -- scripts/common.sh@337 -- # local 'op=<' 00:07:38.323 15:49:49 -- scripts/common.sh@339 -- # ver1_l=2 00:07:38.323 15:49:49 -- scripts/common.sh@340 -- # ver2_l=1 00:07:38.323 15:49:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:38.323 15:49:49 -- scripts/common.sh@343 -- # case "$op" in 00:07:38.323 15:49:49 -- scripts/common.sh@344 -- # : 1 00:07:38.323 15:49:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:38.323 15:49:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:38.323 15:49:49 -- scripts/common.sh@364 -- # decimal 1 00:07:38.323 15:49:49 -- scripts/common.sh@352 -- # local d=1 00:07:38.323 15:49:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.323 15:49:49 -- scripts/common.sh@354 -- # echo 1 00:07:38.323 15:49:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:38.323 15:49:49 -- scripts/common.sh@365 -- # decimal 2 00:07:38.323 15:49:49 -- scripts/common.sh@352 -- # local d=2 00:07:38.323 15:49:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.323 15:49:49 -- scripts/common.sh@354 -- # echo 2 00:07:38.323 15:49:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:38.323 15:49:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:38.323 15:49:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:38.323 15:49:49 -- scripts/common.sh@367 -- # return 0 00:07:38.323 15:49:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.323 15:49:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:38.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.323 --rc genhtml_branch_coverage=1 00:07:38.323 --rc genhtml_function_coverage=1 00:07:38.323 --rc genhtml_legend=1 00:07:38.323 --rc geninfo_all_blocks=1 00:07:38.323 --rc geninfo_unexecuted_blocks=1 00:07:38.323 00:07:38.323 ' 00:07:38.323 15:49:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:38.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.323 --rc genhtml_branch_coverage=1 00:07:38.323 --rc genhtml_function_coverage=1 00:07:38.323 --rc genhtml_legend=1 00:07:38.323 --rc geninfo_all_blocks=1 00:07:38.323 --rc geninfo_unexecuted_blocks=1 00:07:38.323 00:07:38.323 ' 00:07:38.323 15:49:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:38.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.323 --rc genhtml_branch_coverage=1 00:07:38.323 --rc genhtml_function_coverage=1 00:07:38.323 --rc genhtml_legend=1 00:07:38.323 --rc geninfo_all_blocks=1 00:07:38.323 --rc geninfo_unexecuted_blocks=1 00:07:38.323 00:07:38.323 ' 00:07:38.323 15:49:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:38.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.323 --rc genhtml_branch_coverage=1 00:07:38.323 --rc genhtml_function_coverage=1 00:07:38.323 --rc genhtml_legend=1 00:07:38.323 --rc geninfo_all_blocks=1 00:07:38.323 --rc geninfo_unexecuted_blocks=1 00:07:38.323 00:07:38.323 ' 00:07:38.323 15:49:49 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:38.323 15:49:49 -- bdev/nbd_common.sh@6 -- # set -e 00:07:38.323 15:49:49 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:38.323 15:49:49 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:38.323 15:49:49 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:38.323 15:49:49 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:38.323 15:49:49 -- bdev/blockdev.sh@18 -- # : 00:07:38.323 15:49:49 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:07:38.323 15:49:49 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:07:38.323 15:49:49 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:07:38.323 15:49:49 -- bdev/blockdev.sh@672 -- # uname -s 00:07:38.323 15:49:49 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:07:38.323 15:49:49 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:07:38.323 15:49:49 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:07:38.323 15:49:49 -- bdev/blockdev.sh@681 -- # crypto_device= 00:07:38.323 15:49:49 -- bdev/blockdev.sh@682 -- # dek= 00:07:38.323 15:49:49 -- bdev/blockdev.sh@683 -- # env_ctx= 00:07:38.323 15:49:49 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:07:38.323 15:49:49 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:07:38.323 15:49:49 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:07:38.323 15:49:49 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:07:38.323 15:49:49 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:07:38.323 15:49:49 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=60131 00:07:38.323 15:49:49 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:38.323 15:49:49 -- bdev/blockdev.sh@47 -- # waitforlisten 60131 00:07:38.323 15:49:49 -- common/autotest_common.sh@829 -- # '[' -z 60131 ']' 00:07:38.323 15:49:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.323 15:49:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:38.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.324 15:49:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.324 15:49:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:38.324 15:49:49 -- common/autotest_common.sh@10 -- # set +x 00:07:38.324 15:49:49 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:38.324 [2024-11-29 15:49:49.628265] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:38.324 [2024-11-29 15:49:49.628383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60131 ] 00:07:38.582 [2024-11-29 15:49:49.775823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.582 [2024-11-29 15:49:49.913711] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:38.582 [2024-11-29 15:49:49.913866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.200 15:49:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:39.200 15:49:50 -- common/autotest_common.sh@862 -- # return 0 00:07:39.200 15:49:50 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:07:39.200 15:49:50 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:07:39.200 15:49:50 -- bdev/blockdev.sh@79 -- # local json 00:07:39.200 15:49:50 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:07:39.200 15:49:50 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:39.200 15:49:50 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:07:39.200 15:49:50 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.200 15:49:50 -- common/autotest_common.sh@10 -- # set +x 00:07:39.471 15:49:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.471 15:49:50 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:07:39.471 15:49:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.471 15:49:50 -- common/autotest_common.sh@10 -- # set +x 00:07:39.471 15:49:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.471 15:49:50 -- bdev/blockdev.sh@738 -- # cat 00:07:39.471 15:49:50 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:07:39.471 15:49:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.471 15:49:50 -- common/autotest_common.sh@10 -- # set +x 00:07:39.471 15:49:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.471 15:49:50 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:07:39.471 15:49:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.471 15:49:50 -- common/autotest_common.sh@10 -- # set +x 00:07:39.471 15:49:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.471 15:49:50 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:39.471 15:49:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.471 15:49:50 -- common/autotest_common.sh@10 -- # set +x 00:07:39.471 15:49:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.471 15:49:50 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:07:39.471 15:49:50 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:07:39.471 15:49:50 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:07:39.471 15:49:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.471 15:49:50 -- common/autotest_common.sh@10 -- # set +x 00:07:39.471 15:49:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.471 15:49:50 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:07:39.471 15:49:50 -- bdev/blockdev.sh@747 -- # jq -r .name 00:07:39.471 15:49:50 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "11dfdbbb-b5a1-41a9-be1b-fb9d610ed239"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "11dfdbbb-b5a1-41a9-be1b-fb9d610ed239",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "6cf7e3b0-6883-47d6-91d8-7ec7a931b4e6"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6cf7e3b0-6883-47d6-91d8-7ec7a931b4e6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "dbe19cb3-a6ac-4060-b43f-5f6710bcc492"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dbe19cb3-a6ac-4060-b43f-5f6710bcc492",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "f4aa09f1-1072-4e4c-9f9a-eace84aaf277"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f4aa09f1-1072-4e4c-9f9a-eace84aaf277",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 
1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "009c1536-5144-4451-8086-128c18ac8e73"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "009c1536-5144-4451-8086-128c18ac8e73",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "d6069588-3578-4728-8ed3-dc7bab7f602e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d6069588-3578-4728-8ed3-dc7bab7f602e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:39.471 15:49:50 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:07:39.471 15:49:50 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:07:39.471 15:49:50 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:07:39.471 15:49:50 -- bdev/blockdev.sh@752 -- # killprocess 60131 00:07:39.471 15:49:50 -- common/autotest_common.sh@936 -- # '[' -z 60131 ']' 00:07:39.471 15:49:50 -- common/autotest_common.sh@940 -- # kill -0 60131 00:07:39.472 15:49:50 -- common/autotest_common.sh@941 -- # uname 00:07:39.472 15:49:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:39.472 15:49:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60131 
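The JSON objects above are the elements of the harness's bdevs_name array: one record per block device, covering the bdev name, geometry, supported I/O types, and the NVMe-specific trid/ctrlr_data/ns_data. A minimal sketch of pulling the same listing by hand against a running target (assuming a current rpc.py, where the method is named bdev_get_bdevs; trees of this vintage call it get_bdevs):

  scripts/rpc.py bdev_get_bdevs |
    jq -r '.[] | "\(.name): \(.num_blocks) blocks of \(.block_size) bytes"'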
00:07:39.472 15:49:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:39.472 15:49:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:39.472 killing process with pid 60131 00:07:39.472 15:49:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60131' 00:07:39.472 15:49:50 -- common/autotest_common.sh@955 -- # kill 60131 00:07:39.472 15:49:50 -- common/autotest_common.sh@960 -- # wait 60131 00:07:40.845 15:49:52 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:40.845 15:49:52 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:40.845 15:49:52 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:40.845 15:49:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.845 15:49:52 -- common/autotest_common.sh@10 -- # set +x 00:07:40.845 ************************************ 00:07:40.845 START TEST bdev_hello_world 00:07:40.845 ************************************ 00:07:40.845 15:49:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:40.845 [2024-11-29 15:49:52.094405] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:40.845 [2024-11-29 15:49:52.094519] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60210 ] 00:07:40.845 [2024-11-29 15:49:52.239243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.102 [2024-11-29 15:49:52.386269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.669 [2024-11-29 15:49:52.849550] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:41.669 [2024-11-29 15:49:52.849589] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:41.669 [2024-11-29 15:49:52.849605] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:41.669 [2024-11-29 15:49:52.851499] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:41.669 [2024-11-29 15:49:52.851868] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:41.669 [2024-11-29 15:49:52.851890] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:41.669 [2024-11-29 15:49:52.852055] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
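The hello-world pass above is the stock SPDK example app: it opens the Nvme0n1 bdev, opens an I/O channel, writes the greeting, then reads it back before stopping. It can be rerun outside the harness with exactly the arguments shown in the xtrace:

  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1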
00:07:41.669 00:07:41.669 [2024-11-29 15:49:52.852076] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:42.236 00:07:42.236 real 0m1.440s 00:07:42.236 user 0m1.185s 00:07:42.237 sys 0m0.150s 00:07:42.237 15:49:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:42.237 15:49:53 -- common/autotest_common.sh@10 -- # set +x 00:07:42.237 ************************************ 00:07:42.237 END TEST bdev_hello_world 00:07:42.237 ************************************ 00:07:42.237 15:49:53 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:07:42.237 15:49:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:42.237 15:49:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:42.237 15:49:53 -- common/autotest_common.sh@10 -- # set +x 00:07:42.237 ************************************ 00:07:42.237 START TEST bdev_bounds 00:07:42.237 ************************************ 00:07:42.237 15:49:53 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:07:42.237 15:49:53 -- bdev/blockdev.sh@288 -- # bdevio_pid=60241 00:07:42.237 15:49:53 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:42.237 Process bdevio pid: 60241 00:07:42.237 15:49:53 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 60241' 00:07:42.237 15:49:53 -- bdev/blockdev.sh@291 -- # waitforlisten 60241 00:07:42.237 15:49:53 -- common/autotest_common.sh@829 -- # '[' -z 60241 ']' 00:07:42.237 15:49:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.237 15:49:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.237 15:49:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.237 15:49:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.237 15:49:53 -- common/autotest_common.sh@10 -- # set +x 00:07:42.237 15:49:53 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:42.237 [2024-11-29 15:49:53.590802] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:42.237 [2024-11-29 15:49:53.590917] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60241 ] 00:07:42.496 [2024-11-29 15:49:53.736150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:42.496 [2024-11-29 15:49:53.882698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.496 [2024-11-29 15:49:53.882889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.496 [2024-11-29 15:49:53.882912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.066 15:49:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:43.066 15:49:54 -- common/autotest_common.sh@862 -- # return 0 00:07:43.066 15:49:54 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:43.328 I/O targets: 00:07:43.328 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:43.328 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:43.328 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:43.328 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:43.328 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:43.328 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:43.328 00:07:43.328 00:07:43.328 CUnit - A unit testing framework for C - Version 2.1-3 00:07:43.328 http://cunit.sourceforge.net/ 00:07:43.328 00:07:43.328 00:07:43.328 Suite: bdevio tests on: Nvme3n1 00:07:43.328 Test: blockdev write read block ...passed 00:07:43.328 Test: blockdev write zeroes read block ...passed 00:07:43.328 Test: blockdev write zeroes read no split ...passed 00:07:43.328 Test: blockdev write zeroes read split ...passed 00:07:43.328 Test: blockdev write zeroes read split partial ...passed 00:07:43.329 Test: blockdev reset ...[2024-11-29 15:49:54.557925] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:07:43.329 [2024-11-29 15:49:54.561490] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
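bdevio is started with -w so it idles as an RPC server after loading bdev.json, and tests.py then drives the CUnit run over that socket: one suite per bdev, hence the six suites listed under "I/O targets". From the repo root the pair of commands is the same as in the transcript above:

  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  test/bdev/bdevio/tests.py perform_tests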
00:07:43.329 passed 00:07:43.329 Test: blockdev write read 8 blocks ...passed 00:07:43.329 Test: blockdev write read size > 128k ...passed 00:07:43.329 Test: blockdev write read invalid size ...passed 00:07:43.329 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.329 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.329 Test: blockdev write read max offset ...passed 00:07:43.329 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.329 Test: blockdev writev readv 8 blocks ...passed 00:07:43.329 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.329 Test: blockdev writev readv block ...passed 00:07:43.329 Test: blockdev writev readv size > 128k ...passed 00:07:43.329 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.329 Test: blockdev comparev and writev ...[2024-11-29 15:49:54.582979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26fc0e000 len:0x1000 00:07:43.329 [2024-11-29 15:49:54.583028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:43.329 passed 00:07:43.329 Test: blockdev nvme passthru rw ...passed 00:07:43.329 Test: blockdev nvme passthru vendor specific ...[2024-11-29 15:49:54.585828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:43.329 [2024-11-29 15:49:54.585937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:43.329 passed 00:07:43.329 Test: blockdev nvme admin passthru ...passed 00:07:43.329 Test: blockdev copy ...passed 00:07:43.329 Suite: bdevio tests on: Nvme2n3 00:07:43.329 Test: blockdev write read block ...passed 00:07:43.329 Test: blockdev write zeroes read block ...passed 00:07:43.329 Test: blockdev write zeroes read no split ...passed 00:07:43.329 Test: blockdev write zeroes read split ...passed 00:07:43.329 Test: blockdev write zeroes read split partial ...passed 00:07:43.329 Test: blockdev reset ...[2024-11-29 15:49:54.640755] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:43.329 [2024-11-29 15:49:54.644604] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
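The COMPARE FAILURE (02/85) completions logged under "comparev and writev" are the intended outcome of the deliberate miscompare case, not a test failure. Note also that the suite starting here (Nvme2n3) and the two after it (Nvme2n2, Nvme2n1) are three namespaces of the single controller at 0000:00:08.0, so each of their reset tests cycles the same PCIe function. That mapping can be read straight out of the bdev JSON dumped earlier (same rpc.py naming assumption as above):

  scripts/rpc.py bdev_get_bdevs |
    jq -r '.[] | select(.driver_specific.nvme != null)
      | "\(.driver_specific.nvme[0].pci_address)  \(.name)"' | sort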
00:07:43.329 passed 00:07:43.329 Test: blockdev write read 8 blocks ...passed 00:07:43.329 Test: blockdev write read size > 128k ...passed 00:07:43.329 Test: blockdev write read invalid size ...passed 00:07:43.329 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.329 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.329 Test: blockdev write read max offset ...passed 00:07:43.329 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.329 Test: blockdev writev readv 8 blocks ...passed 00:07:43.329 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.329 Test: blockdev writev readv block ...passed 00:07:43.329 Test: blockdev writev readv size > 128k ...passed 00:07:43.329 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.329 Test: blockdev comparev and writev ...[2024-11-29 15:49:54.664630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26fc0a000 len:0x1000 00:07:43.329 [2024-11-29 15:49:54.664670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:43.329 passed 00:07:43.329 Test: blockdev nvme passthru rw ...passed 00:07:43.329 Test: blockdev nvme passthru vendor specific ...[2024-11-29 15:49:54.667013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:43.329 [2024-11-29 15:49:54.667117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:43.329 passed 00:07:43.329 Test: blockdev nvme admin passthru ...passed 00:07:43.329 Test: blockdev copy ...passed 00:07:43.329 Suite: bdevio tests on: Nvme2n2 00:07:43.329 Test: blockdev write read block ...passed 00:07:43.329 Test: blockdev write zeroes read block ...passed 00:07:43.329 Test: blockdev write zeroes read no split ...passed 00:07:43.329 Test: blockdev write zeroes read split ...passed 00:07:43.329 Test: blockdev write zeroes read split partial ...passed 00:07:43.329 Test: blockdev reset ...[2024-11-29 15:49:54.729215] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:43.329 [2024-11-29 15:49:54.734008] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:43.329 passed 00:07:43.329 Test: blockdev write read 8 blocks ...passed 00:07:43.329 Test: blockdev write read size > 128k ...passed 00:07:43.329 Test: blockdev write read invalid size ...passed 00:07:43.329 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.329 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.329 Test: blockdev write read max offset ...passed 00:07:43.329 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.329 Test: blockdev writev readv 8 blocks ...passed 00:07:43.329 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.329 Test: blockdev writev readv block ...passed 00:07:43.329 Test: blockdev writev readv size > 128k ...passed 00:07:43.329 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.329 Test: blockdev comparev and writev ...[2024-11-29 15:49:54.755504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x264e06000 len:0x1000 00:07:43.329 [2024-11-29 15:49:54.755541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:43.329 passed 00:07:43.590 Test: blockdev nvme passthru rw ...passed 00:07:43.590 Test: blockdev nvme passthru vendor specific ...[2024-11-29 15:49:54.757989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:43.590 [2024-11-29 15:49:54.758022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:43.590 passed 00:07:43.590 Test: blockdev nvme admin passthru ...passed 00:07:43.590 Test: blockdev copy ...passed 00:07:43.590 Suite: bdevio tests on: Nvme2n1 00:07:43.590 Test: blockdev write read block ...passed 00:07:43.590 Test: blockdev write zeroes read block ...passed 00:07:43.590 Test: blockdev write zeroes read no split ...passed 00:07:43.590 Test: blockdev write zeroes read split ...passed 00:07:43.591 Test: blockdev write zeroes read split partial ...passed 00:07:43.591 Test: blockdev reset ...[2024-11-29 15:49:54.812066] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:43.591 [2024-11-29 15:49:54.816511] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
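Each reset test goes through the bdev_nvme layer: the paired nvme_ctrlr_disconnect / _bdev_nvme_reset_ctrlr_complete notices are the disconnect and reconnect halves of one controller reset. Newer SPDK trees expose the same operation directly over RPC; both the method name and the controller name below are assumptions for this tree (check rpc.py --help before relying on them):

  # hypothetical here: current SPDK names this method bdev_nvme_reset_controller
  scripts/rpc.py bdev_nvme_reset_controller Nvme2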
00:07:43.591 passed 00:07:43.591 Test: blockdev write read 8 blocks ...passed 00:07:43.591 Test: blockdev write read size > 128k ...passed 00:07:43.591 Test: blockdev write read invalid size ...passed 00:07:43.591 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.591 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.591 Test: blockdev write read max offset ...passed 00:07:43.591 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.591 Test: blockdev writev readv 8 blocks ...passed 00:07:43.591 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.591 Test: blockdev writev readv block ...passed 00:07:43.591 Test: blockdev writev readv size > 128k ...passed 00:07:43.591 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.591 Test: blockdev comparev and writev ...[2024-11-29 15:49:54.836589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x264e01000 len:0x1000 00:07:43.591 [2024-11-29 15:49:54.836735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:43.591 passed 00:07:43.591 Test: blockdev nvme passthru rw ...passed 00:07:43.591 Test: blockdev nvme passthru vendor specific ...[2024-11-29 15:49:54.838503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:43.591 [2024-11-29 15:49:54.838538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:43.591 passed 00:07:43.591 Test: blockdev nvme admin passthru ...passed 00:07:43.591 Test: blockdev copy ...passed 00:07:43.591 Suite: bdevio tests on: Nvme1n1 00:07:43.591 Test: blockdev write read block ...passed 00:07:43.591 Test: blockdev write zeroes read block ...passed 00:07:43.591 Test: blockdev write zeroes read no split ...passed 00:07:43.591 Test: blockdev write zeroes read split ...passed 00:07:43.591 Test: blockdev write zeroes read split partial ...passed 00:07:43.591 Test: blockdev reset ...[2024-11-29 15:49:54.894567] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:07:43.591 [2024-11-29 15:49:54.898484] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:43.591 passed 00:07:43.591 Test: blockdev write read 8 blocks ...passed 00:07:43.591 Test: blockdev write read size > 128k ...passed 00:07:43.591 Test: blockdev write read invalid size ...passed 00:07:43.591 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.591 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.591 Test: blockdev write read max offset ...passed 00:07:43.591 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.591 Test: blockdev writev readv 8 blocks ...passed 00:07:43.591 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.591 Test: blockdev writev readv block ...passed 00:07:43.591 Test: blockdev writev readv size > 128k ...passed 00:07:43.591 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.591 Test: blockdev comparev and writev ...[2024-11-29 15:49:54.917306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26a806000 len:0x1000 00:07:43.591 [2024-11-29 15:49:54.917351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:43.591 passed 00:07:43.591 Test: blockdev nvme passthru rw ...passed 00:07:43.591 Test: blockdev nvme passthru vendor specific ...passed 00:07:43.591 Test: blockdev nvme admin passthru ...[2024-11-29 15:49:54.920023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:43.591 [2024-11-29 15:49:54.920060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:43.591 passed 00:07:43.591 Test: blockdev copy ...passed 00:07:43.591 Suite: bdevio tests on: Nvme0n1 00:07:43.591 Test: blockdev write read block ...passed 00:07:43.591 Test: blockdev write zeroes read block ...passed 00:07:43.591 Test: blockdev write zeroes read no split ...passed 00:07:43.591 Test: blockdev write zeroes read split ...passed 00:07:43.591 Test: blockdev write zeroes read split partial ...passed 00:07:43.591 Test: blockdev reset ...[2024-11-29 15:49:54.986054] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:07:43.591 [2024-11-29 15:49:54.990731] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
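After the final suite (Nvme0n1, just reset above) completes, the harness tears bdevio down with its killprocess helper, whose xtrace is visible earlier for pid 60131 and below for pid 60241. Condensed from that xtrace (a sketch, not the verbatim function):

  killprocess() {
    local pid=$1
    kill -0 "$pid" || return            # bail out if the process is already gone
    if [[ $(uname) == Linux && $(ps --no-headers -o comm= "$pid") == sudo ]]; then
      sudo kill "$pid"                  # branch not taken here: comm is reactor_0
    else
      echo "killing process with pid $pid"
      kill "$pid"
    fi
    wait "$pid"                         # reap it and propagate the exit status
  }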
00:07:43.591 passed 00:07:43.591 Test: blockdev write read 8 blocks ...passed 00:07:43.591 Test: blockdev write read size > 128k ...passed 00:07:43.591 Test: blockdev write read invalid size ...passed 00:07:43.591 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.591 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.591 Test: blockdev write read max offset ...passed 00:07:43.591 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.591 Test: blockdev writev readv 8 blocks ...passed 00:07:43.591 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.591 Test: blockdev writev readv block ...passed 00:07:43.591 Test: blockdev writev readv size > 128k ...passed 00:07:43.591 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.591 Test: blockdev comparev and writev ...passed 00:07:43.591 Test: blockdev nvme passthru rw ...[2024-11-29 15:49:55.007997] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:43.591 separate metadata which is not supported yet. 00:07:43.591 passed 00:07:43.591 Test: blockdev nvme passthru vendor specific ...passed 00:07:43.591 Test: blockdev nvme admin passthru ...[2024-11-29 15:49:55.009785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:43.591 [2024-11-29 15:49:55.009827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:43.591 passed 00:07:43.852 Test: blockdev copy ...passed 00:07:43.852 00:07:43.852 Run Summary: Type Total Ran Passed Failed Inactive 00:07:43.852 suites 6 6 n/a 0 0 00:07:43.852 tests 138 138 138 0 0 00:07:43.852 asserts 893 893 893 0 n/a 00:07:43.852 00:07:43.852 Elapsed time = 1.300 seconds 00:07:43.852 0 00:07:43.852 15:49:55 -- bdev/blockdev.sh@293 -- # killprocess 60241 00:07:43.852 15:49:55 -- common/autotest_common.sh@936 -- # '[' -z 60241 ']' 00:07:43.852 15:49:55 -- common/autotest_common.sh@940 -- # kill -0 60241 00:07:43.852 15:49:55 -- common/autotest_common.sh@941 -- # uname 00:07:43.852 15:49:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:43.852 15:49:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60241 00:07:43.852 killing process with pid 60241 00:07:43.852 15:49:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:43.852 15:49:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:43.852 15:49:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60241' 00:07:43.852 15:49:55 -- common/autotest_common.sh@955 -- # kill 60241 00:07:43.852 15:49:55 -- common/autotest_common.sh@960 -- # wait 60241 00:07:44.425 ************************************ 00:07:44.425 END TEST bdev_bounds 00:07:44.425 ************************************ 00:07:44.425 15:49:55 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:07:44.425 00:07:44.425 real 0m2.274s 00:07:44.425 user 0m5.540s 00:07:44.425 sys 0m0.286s 00:07:44.425 15:49:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:44.426 15:49:55 -- common/autotest_common.sh@10 -- # set +x 00:07:44.688 15:49:55 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:44.688 15:49:55 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:07:44.688 15:49:55 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.688 15:49:55 -- common/autotest_common.sh@10 -- # set +x 00:07:44.688 ************************************ 00:07:44.688 START TEST bdev_nbd 00:07:44.688 ************************************ 00:07:44.688 15:49:55 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:44.688 15:49:55 -- bdev/blockdev.sh@298 -- # uname -s 00:07:44.688 15:49:55 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:07:44.688 15:49:55 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.688 15:49:55 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:44.688 15:49:55 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:44.688 15:49:55 -- bdev/blockdev.sh@302 -- # local bdev_all 00:07:44.688 15:49:55 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:07:44.688 15:49:55 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:07:44.688 15:49:55 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:44.688 15:49:55 -- bdev/blockdev.sh@309 -- # local nbd_all 00:07:44.688 15:49:55 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:07:44.688 15:49:55 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:44.688 15:49:55 -- bdev/blockdev.sh@312 -- # local nbd_list 00:07:44.688 15:49:55 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:44.688 15:49:55 -- bdev/blockdev.sh@313 -- # local bdev_list 00:07:44.688 15:49:55 -- bdev/blockdev.sh@316 -- # nbd_pid=60306 00:07:44.688 15:49:55 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:44.688 15:49:55 -- bdev/blockdev.sh@318 -- # waitforlisten 60306 /var/tmp/spdk-nbd.sock 00:07:44.688 15:49:55 -- common/autotest_common.sh@829 -- # '[' -z 60306 ']' 00:07:44.688 15:49:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:44.688 15:49:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:44.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:44.688 15:49:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:44.688 15:49:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:44.688 15:49:55 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:44.688 15:49:55 -- common/autotest_common.sh@10 -- # set +x 00:07:44.688 [2024-11-29 15:49:55.951203] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
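The nbd test needs a long-lived target, so bdev_svc is launched on a private RPC socket and waitforlisten blocks until that socket accepts connections. By hand the equivalent is roughly the following (the polling loop is a hedged stand-in for waitforlisten, and rpc_get_methods is the current method name; older trees call it get_rpc_methods):

  test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
    --json test/bdev/bdev.json &
  until scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
  done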
00:07:44.688 [2024-11-29 15:49:55.951342] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.688 [2024-11-29 15:49:56.097861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.949 [2024-11-29 15:49:56.335512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.338 15:49:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:46.338 15:49:57 -- common/autotest_common.sh@862 -- # return 0 00:07:46.338 15:49:57 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:46.338 15:49:57 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.338 15:49:57 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:46.338 15:49:57 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:46.338 15:49:57 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:46.338 15:49:57 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.338 15:49:57 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:46.338 15:49:57 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:46.338 15:49:57 -- bdev/nbd_common.sh@24 -- # local i 00:07:46.338 15:49:57 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:46.338 15:49:57 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:46.338 15:49:57 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:46.338 15:49:57 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:46.338 15:49:57 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:46.338 15:49:57 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:46.338 15:49:57 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:46.338 15:49:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:46.338 15:49:57 -- common/autotest_common.sh@867 -- # local i 00:07:46.338 15:49:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:46.338 15:49:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:46.338 15:49:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:46.338 15:49:57 -- common/autotest_common.sh@871 -- # break 00:07:46.338 15:49:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:46.338 15:49:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:46.338 15:49:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.338 1+0 records in 00:07:46.338 1+0 records out 00:07:46.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000824075 s, 5.0 MB/s 00:07:46.338 15:49:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.338 15:49:57 -- common/autotest_common.sh@884 -- # size=4096 00:07:46.338 15:49:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.338 15:49:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:46.338 15:49:57 -- common/autotest_common.sh@887 -- # return 0 00:07:46.338 15:49:57 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:46.338 15:49:57 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:46.338 15:49:57 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:46.600 15:49:57 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:46.600 15:49:57 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:46.600 15:49:57 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:46.600 15:49:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:46.600 15:49:57 -- common/autotest_common.sh@867 -- # local i 00:07:46.600 15:49:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:46.600 15:49:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:46.600 15:49:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:46.600 15:49:57 -- common/autotest_common.sh@871 -- # break 00:07:46.600 15:49:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:46.600 15:49:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:46.600 15:49:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.600 1+0 records in 00:07:46.600 1+0 records out 00:07:46.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010339 s, 4.0 MB/s 00:07:46.600 15:49:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.600 15:49:57 -- common/autotest_common.sh@884 -- # size=4096 00:07:46.600 15:49:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.600 15:49:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:46.600 15:49:57 -- common/autotest_common.sh@887 -- # return 0 00:07:46.600 15:49:57 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:46.600 15:49:57 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:46.600 15:49:57 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:46.862 15:49:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:46.862 15:49:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:46.862 15:49:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:46.862 15:49:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:07:46.862 15:49:58 -- common/autotest_common.sh@867 -- # local i 00:07:46.862 15:49:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:46.862 15:49:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:46.862 15:49:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:07:46.862 15:49:58 -- common/autotest_common.sh@871 -- # break 00:07:46.862 15:49:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:46.862 15:49:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:46.862 15:49:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.862 1+0 records in 00:07:46.862 1+0 records out 00:07:46.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000885046 s, 4.6 MB/s 00:07:46.862 15:49:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.862 15:49:58 -- common/autotest_common.sh@884 -- # size=4096 00:07:46.862 15:49:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.862 15:49:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:46.862 15:49:58 -- common/autotest_common.sh@887 -- # return 0 
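Every nbd_start_disk above is followed by waitfornbd, which polls /proc/partitions until the kernel exposes the new node and then proves it answers I/O with a single 4 KiB direct read, exactly as the dd transcripts show. Condensed (the bound of 20 attempts and the dd/stat check are from the xtrace; the delay between retries is an assumption):

  waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1                         # retry delay: assumed, not shown in the log
    done
    dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [[ $(stat -c %s /tmp/nbdtest) != 0 ]]    # a zero-byte read means the device is dead
  }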
00:07:46.862 15:49:58 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:46.862 15:49:58 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:46.862 15:49:58 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:47.125 15:49:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:47.125 15:49:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:47.125 15:49:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:47.125 15:49:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:07:47.125 15:49:58 -- common/autotest_common.sh@867 -- # local i 00:07:47.125 15:49:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:47.125 15:49:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:47.125 15:49:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:07:47.125 15:49:58 -- common/autotest_common.sh@871 -- # break 00:07:47.125 15:49:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:47.125 15:49:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:47.125 15:49:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.125 1+0 records in 00:07:47.125 1+0 records out 00:07:47.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000879706 s, 4.7 MB/s 00:07:47.125 15:49:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.125 15:49:58 -- common/autotest_common.sh@884 -- # size=4096 00:07:47.125 15:49:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.125 15:49:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:47.125 15:49:58 -- common/autotest_common.sh@887 -- # return 0 00:07:47.125 15:49:58 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:47.125 15:49:58 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:47.125 15:49:58 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:47.386 15:49:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:47.386 15:49:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:47.386 15:49:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:47.386 15:49:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:07:47.386 15:49:58 -- common/autotest_common.sh@867 -- # local i 00:07:47.386 15:49:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:47.386 15:49:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:47.386 15:49:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:07:47.386 15:49:58 -- common/autotest_common.sh@871 -- # break 00:07:47.386 15:49:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:47.386 15:49:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:47.386 15:49:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.386 1+0 records in 00:07:47.386 1+0 records out 00:07:47.386 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115616 s, 3.5 MB/s 00:07:47.386 15:49:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.386 15:49:58 -- common/autotest_common.sh@884 -- # size=4096 00:07:47.386 15:49:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.386 15:49:58 -- common/autotest_common.sh@886 -- # '[' 4096 
'!=' 0 ']' 00:07:47.386 15:49:58 -- common/autotest_common.sh@887 -- # return 0 00:07:47.386 15:49:58 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:47.386 15:49:58 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:47.386 15:49:58 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:47.386 15:49:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:47.386 15:49:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:47.386 15:49:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:47.386 15:49:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:07:47.386 15:49:58 -- common/autotest_common.sh@867 -- # local i 00:07:47.386 15:49:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:47.386 15:49:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:47.386 15:49:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:07:47.386 15:49:58 -- common/autotest_common.sh@871 -- # break 00:07:47.386 15:49:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:47.386 15:49:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:47.386 15:49:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.386 1+0 records in 00:07:47.386 1+0 records out 00:07:47.386 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123082 s, 3.3 MB/s 00:07:47.386 15:49:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.646 15:49:58 -- common/autotest_common.sh@884 -- # size=4096 00:07:47.646 15:49:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.646 15:49:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:47.646 15:49:58 -- common/autotest_common.sh@887 -- # return 0 00:07:47.646 15:49:58 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:47.646 15:49:58 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:47.646 15:49:58 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:47.646 15:49:59 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:47.646 { 00:07:47.646 "nbd_device": "/dev/nbd0", 00:07:47.646 "bdev_name": "Nvme0n1" 00:07:47.646 }, 00:07:47.646 { 00:07:47.646 "nbd_device": "/dev/nbd1", 00:07:47.646 "bdev_name": "Nvme1n1" 00:07:47.646 }, 00:07:47.646 { 00:07:47.646 "nbd_device": "/dev/nbd2", 00:07:47.646 "bdev_name": "Nvme2n1" 00:07:47.646 }, 00:07:47.646 { 00:07:47.646 "nbd_device": "/dev/nbd3", 00:07:47.646 "bdev_name": "Nvme2n2" 00:07:47.646 }, 00:07:47.646 { 00:07:47.646 "nbd_device": "/dev/nbd4", 00:07:47.646 "bdev_name": "Nvme2n3" 00:07:47.646 }, 00:07:47.646 { 00:07:47.646 "nbd_device": "/dev/nbd5", 00:07:47.646 "bdev_name": "Nvme3n1" 00:07:47.646 } 00:07:47.646 ]' 00:07:47.646 15:49:59 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:47.646 15:49:59 -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:47.646 { 00:07:47.646 "nbd_device": "/dev/nbd0", 00:07:47.646 "bdev_name": "Nvme0n1" 00:07:47.646 }, 00:07:47.646 { 00:07:47.646 "nbd_device": "/dev/nbd1", 00:07:47.646 "bdev_name": "Nvme1n1" 00:07:47.646 }, 00:07:47.646 { 00:07:47.646 "nbd_device": "/dev/nbd2", 00:07:47.646 "bdev_name": "Nvme2n1" 00:07:47.646 }, 00:07:47.646 { 00:07:47.646 "nbd_device": "/dev/nbd3", 00:07:47.646 "bdev_name": "Nvme2n2" 00:07:47.646 }, 00:07:47.646 { 00:07:47.646 "nbd_device": 
"/dev/nbd4", 00:07:47.646 "bdev_name": "Nvme2n3" 00:07:47.646 }, 00:07:47.646 { 00:07:47.646 "nbd_device": "/dev/nbd5", 00:07:47.646 "bdev_name": "Nvme3n1" 00:07:47.646 } 00:07:47.646 ]' 00:07:47.646 15:49:59 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:47.646 15:49:59 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:47.646 15:49:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.646 15:49:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:47.646 15:49:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:47.646 15:49:59 -- bdev/nbd_common.sh@51 -- # local i 00:07:47.646 15:49:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.646 15:49:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:47.907 15:49:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:47.907 15:49:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:47.907 15:49:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:47.907 15:49:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.907 15:49:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.907 15:49:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:47.907 15:49:59 -- bdev/nbd_common.sh@41 -- # break 00:07:47.907 15:49:59 -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.907 15:49:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.907 15:49:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:48.168 15:49:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:48.168 15:49:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:48.168 15:49:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:48.168 15:49:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.168 15:49:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.168 15:49:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:48.168 15:49:59 -- bdev/nbd_common.sh@41 -- # break 00:07:48.168 15:49:59 -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.168 15:49:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.168 15:49:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:48.430 15:49:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:48.430 15:49:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:48.430 15:49:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:48.430 15:49:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.430 15:49:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.430 15:49:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:48.430 15:49:59 -- bdev/nbd_common.sh@41 -- # break 00:07:48.430 15:49:59 -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.430 15:49:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.430 15:49:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:48.692 15:49:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:48.692 15:49:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:48.692 15:49:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:48.692 
15:49:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.692 15:49:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.692 15:49:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:48.692 15:49:59 -- bdev/nbd_common.sh@41 -- # break 00:07:48.692 15:49:59 -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.692 15:49:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.692 15:49:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:48.692 15:50:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:48.692 15:50:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:48.692 15:50:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:48.692 15:50:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.692 15:50:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.692 15:50:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:48.692 15:50:00 -- bdev/nbd_common.sh@41 -- # break 00:07:48.692 15:50:00 -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.692 15:50:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.692 15:50:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:48.954 15:50:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:48.954 15:50:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:48.954 15:50:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:48.954 15:50:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.954 15:50:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.954 15:50:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:48.954 15:50:00 -- bdev/nbd_common.sh@41 -- # break 00:07:48.954 15:50:00 -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.954 15:50:00 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:48.954 15:50:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.954 15:50:00 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:49.215 15:50:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:49.215 15:50:00 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:49.215 15:50:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:49.215 15:50:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:49.215 15:50:00 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:49.215 15:50:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:49.215 15:50:00 -- bdev/nbd_common.sh@65 -- # true 00:07:49.216 15:50:00 -- bdev/nbd_common.sh@65 -- # count=0 00:07:49.216 15:50:00 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:49.216 15:50:00 -- bdev/nbd_common.sh@122 -- # count=0 00:07:49.216 15:50:00 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:49.216 15:50:00 -- bdev/nbd_common.sh@127 -- # return 0 00:07:49.216 15:50:00 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:49.216 15:50:00 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.216 15:50:00 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:49.216 15:50:00 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:49.216 15:50:00 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:49.216 15:50:00 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:49.216 15:50:00 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:49.216 15:50:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.216 15:50:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:49.216 15:50:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:49.216 15:50:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:49.216 15:50:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:49.216 15:50:00 -- bdev/nbd_common.sh@12 -- # local i 00:07:49.216 15:50:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:49.216 15:50:00 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:49.216 15:50:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:49.478 /dev/nbd0 00:07:49.478 15:50:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:49.478 15:50:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:49.478 15:50:00 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:49.478 15:50:00 -- common/autotest_common.sh@867 -- # local i 00:07:49.478 15:50:00 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:49.478 15:50:00 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:49.478 15:50:00 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:49.478 15:50:00 -- common/autotest_common.sh@871 -- # break 00:07:49.478 15:50:00 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:49.478 15:50:00 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:49.478 15:50:00 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:49.478 1+0 records in 00:07:49.478 1+0 records out 00:07:49.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107195 s, 3.8 MB/s 00:07:49.478 15:50:00 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.478 15:50:00 -- common/autotest_common.sh@884 -- # size=4096 00:07:49.478 15:50:00 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.478 15:50:00 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:49.478 15:50:00 -- common/autotest_common.sh@887 -- # return 0 00:07:49.478 15:50:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:49.478 15:50:00 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:49.478 15:50:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:49.738 /dev/nbd1 00:07:49.738 15:50:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:49.738 15:50:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:49.738 15:50:01 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:49.738 15:50:01 -- common/autotest_common.sh@867 -- # local i 00:07:49.738 15:50:01 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:49.738 15:50:01 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:49.738 15:50:01 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:49.738 15:50:01 -- common/autotest_common.sh@871 -- # break 
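In between, the first set of exports was torn down: nbd_get_disks listed the live nodes and each was stopped with nbd_stop_disk plus a waitfornbd_exit poll until the node left /proc/partitions. The pass now underway differs from the first in that nbd_start_disk is given an explicit /dev/nbdX instead of letting the target choose one. Both steps, reduced to their RPC essentials:

  # stop everything currently exported
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks |
    jq -r '.[] | .nbd_device' |
    while read -r dev; do
      scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
    done
  # re-export with a caller-chosen device node, then verify with a direct read
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct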
00:07:49.738 15:50:01 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:49.738 15:50:01 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:49.738 15:50:01 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:49.738 1+0 records in 00:07:49.738 1+0 records out 00:07:49.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119583 s, 3.4 MB/s 00:07:49.738 15:50:01 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.738 15:50:01 -- common/autotest_common.sh@884 -- # size=4096 00:07:49.738 15:50:01 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.738 15:50:01 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:49.738 15:50:01 -- common/autotest_common.sh@887 -- # return 0 00:07:49.738 15:50:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:49.738 15:50:01 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:49.738 15:50:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:49.995 /dev/nbd10 00:07:49.995 15:50:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:49.995 15:50:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:49.995 15:50:01 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:07:49.995 15:50:01 -- common/autotest_common.sh@867 -- # local i 00:07:49.995 15:50:01 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:49.995 15:50:01 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:49.995 15:50:01 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:07:49.995 15:50:01 -- common/autotest_common.sh@871 -- # break 00:07:49.995 15:50:01 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:49.995 15:50:01 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:49.995 15:50:01 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:49.995 1+0 records in 00:07:49.995 1+0 records out 00:07:49.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000675921 s, 6.1 MB/s 00:07:49.995 15:50:01 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.995 15:50:01 -- common/autotest_common.sh@884 -- # size=4096 00:07:49.995 15:50:01 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.996 15:50:01 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:49.996 15:50:01 -- common/autotest_common.sh@887 -- # return 0 00:07:49.996 15:50:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:49.996 15:50:01 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:49.996 15:50:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:49.996 /dev/nbd11 00:07:50.252 15:50:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:50.252 15:50:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:50.252 15:50:01 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:07:50.252 15:50:01 -- common/autotest_common.sh@867 -- # local i 00:07:50.252 15:50:01 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:50.252 15:50:01 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:50.252 15:50:01 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:07:50.252 15:50:01 -- 
common/autotest_common.sh@871 -- # break 00:07:50.252 15:50:01 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:50.252 15:50:01 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:50.252 15:50:01 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:50.252 1+0 records in 00:07:50.252 1+0 records out 00:07:50.252 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485534 s, 8.4 MB/s 00:07:50.252 15:50:01 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:50.252 15:50:01 -- common/autotest_common.sh@884 -- # size=4096 00:07:50.252 15:50:01 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:50.252 15:50:01 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:50.252 15:50:01 -- common/autotest_common.sh@887 -- # return 0 00:07:50.252 15:50:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:50.252 15:50:01 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:50.252 15:50:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:50.252 /dev/nbd12 00:07:50.252 15:50:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:50.252 15:50:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:50.252 15:50:01 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:07:50.252 15:50:01 -- common/autotest_common.sh@867 -- # local i 00:07:50.252 15:50:01 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:50.252 15:50:01 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:50.252 15:50:01 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:07:50.252 15:50:01 -- common/autotest_common.sh@871 -- # break 00:07:50.252 15:50:01 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:50.252 15:50:01 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:50.252 15:50:01 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:50.252 1+0 records in 00:07:50.252 1+0 records out 00:07:50.252 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053486 s, 7.7 MB/s 00:07:50.252 15:50:01 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:50.252 15:50:01 -- common/autotest_common.sh@884 -- # size=4096 00:07:50.252 15:50:01 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:50.252 15:50:01 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:50.252 15:50:01 -- common/autotest_common.sh@887 -- # return 0 00:07:50.253 15:50:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:50.253 15:50:01 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:50.253 15:50:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:50.509 /dev/nbd13 00:07:50.509 15:50:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:50.509 15:50:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:50.509 15:50:01 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:07:50.509 15:50:01 -- common/autotest_common.sh@867 -- # local i 00:07:50.509 15:50:01 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:50.509 15:50:01 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:50.509 15:50:01 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 
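Each of the six devices repeats the same start-and-wait step traced here. Pulled together, the loop driving this stretch of the log is essentially the following sketch; the paths and array contents are copied from the declarations above, and waitfornbd is the helper sketched earlier:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rpc_server=/var/tmp/spdk-nbd.sock
bdev_list=(Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
for ((i = 0; i < ${#bdev_list[@]}; i++)); do
    # Export the SPDK bdev as a kernel block device over the NBD RPC socket,
    "$rpc_py" -s "$rpc_server" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
    # then block until the node is present and readable.
    waitfornbd "$(basename "${nbd_list[i]}")"
done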
00:07:50.509 15:50:01 -- common/autotest_common.sh@871 -- # break 00:07:50.509 15:50:01 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:50.509 15:50:01 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:50.509 15:50:01 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:50.509 1+0 records in 00:07:50.509 1+0 records out 00:07:50.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328645 s, 12.5 MB/s 00:07:50.509 15:50:01 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:50.509 15:50:01 -- common/autotest_common.sh@884 -- # size=4096 00:07:50.509 15:50:01 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:50.509 15:50:01 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:50.509 15:50:01 -- common/autotest_common.sh@887 -- # return 0 00:07:50.509 15:50:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:50.509 15:50:01 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:50.509 15:50:01 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:50.509 15:50:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.509 15:50:01 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:50.766 15:50:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:50.766 { 00:07:50.766 "nbd_device": "/dev/nbd0", 00:07:50.766 "bdev_name": "Nvme0n1" 00:07:50.766 }, 00:07:50.766 { 00:07:50.766 "nbd_device": "/dev/nbd1", 00:07:50.766 "bdev_name": "Nvme1n1" 00:07:50.766 }, 00:07:50.766 { 00:07:50.766 "nbd_device": "/dev/nbd10", 00:07:50.766 "bdev_name": "Nvme2n1" 00:07:50.766 }, 00:07:50.766 { 00:07:50.766 "nbd_device": "/dev/nbd11", 00:07:50.766 "bdev_name": "Nvme2n2" 00:07:50.766 }, 00:07:50.766 { 00:07:50.766 "nbd_device": "/dev/nbd12", 00:07:50.766 "bdev_name": "Nvme2n3" 00:07:50.766 }, 00:07:50.766 { 00:07:50.766 "nbd_device": "/dev/nbd13", 00:07:50.766 "bdev_name": "Nvme3n1" 00:07:50.766 } 00:07:50.766 ]' 00:07:50.766 15:50:02 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:50.766 { 00:07:50.766 "nbd_device": "/dev/nbd0", 00:07:50.766 "bdev_name": "Nvme0n1" 00:07:50.766 }, 00:07:50.766 { 00:07:50.766 "nbd_device": "/dev/nbd1", 00:07:50.766 "bdev_name": "Nvme1n1" 00:07:50.766 }, 00:07:50.766 { 00:07:50.766 "nbd_device": "/dev/nbd10", 00:07:50.766 "bdev_name": "Nvme2n1" 00:07:50.766 }, 00:07:50.766 { 00:07:50.766 "nbd_device": "/dev/nbd11", 00:07:50.766 "bdev_name": "Nvme2n2" 00:07:50.766 }, 00:07:50.766 { 00:07:50.766 "nbd_device": "/dev/nbd12", 00:07:50.766 "bdev_name": "Nvme2n3" 00:07:50.766 }, 00:07:50.766 { 00:07:50.766 "nbd_device": "/dev/nbd13", 00:07:50.766 "bdev_name": "Nvme3n1" 00:07:50.766 } 00:07:50.766 ]' 00:07:50.766 15:50:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:50.766 15:50:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:50.766 /dev/nbd1 00:07:50.766 /dev/nbd10 00:07:50.766 /dev/nbd11 00:07:50.766 /dev/nbd12 00:07:50.766 /dev/nbd13' 00:07:50.766 15:50:02 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:50.766 /dev/nbd1 00:07:50.766 /dev/nbd10 00:07:50.766 /dev/nbd11 00:07:50.766 /dev/nbd12 00:07:50.766 /dev/nbd13' 00:07:50.766 15:50:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:50.766 15:50:02 -- bdev/nbd_common.sh@65 -- # count=6 00:07:50.766 15:50:02 -- bdev/nbd_common.sh@66 -- # echo 6 00:07:50.766 15:50:02 -- bdev/nbd_common.sh@95 -- # 
count=6 00:07:50.766 15:50:02 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:50.766 15:50:02 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:50.766 15:50:02 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:50.766 15:50:02 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:50.766 15:50:02 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:50.766 15:50:02 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:50.766 15:50:02 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:50.766 15:50:02 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:50.766 256+0 records in 00:07:50.767 256+0 records out 00:07:50.767 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011958 s, 87.7 MB/s 00:07:50.767 15:50:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.767 15:50:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:50.767 256+0 records in 00:07:50.767 256+0 records out 00:07:50.767 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0949039 s, 11.0 MB/s 00:07:50.767 15:50:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.767 15:50:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:51.025 256+0 records in 00:07:51.025 256+0 records out 00:07:51.025 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0768509 s, 13.6 MB/s 00:07:51.025 15:50:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:51.025 15:50:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:51.025 256+0 records in 00:07:51.025 256+0 records out 00:07:51.025 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133204 s, 7.9 MB/s 00:07:51.025 15:50:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:51.025 15:50:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:51.282 256+0 records in 00:07:51.282 256+0 records out 00:07:51.282 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14504 s, 7.2 MB/s 00:07:51.282 15:50:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:51.282 15:50:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:51.282 256+0 records in 00:07:51.282 256+0 records out 00:07:51.282 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153568 s, 6.8 MB/s 00:07:51.282 15:50:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:51.282 15:50:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:51.540 256+0 records in 00:07:51.541 256+0 records out 00:07:51.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0935823 s, 11.2 MB/s 00:07:51.541 15:50:02 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:51.541 15:50:02 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:51.541 15:50:02 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:51.541 15:50:02 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:07:51.541 15:50:02 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:51.541 15:50:02 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:51.541 15:50:02 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:51.541 15:50:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.541 15:50:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:51.541 15:50:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.541 15:50:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:51.541 15:50:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.541 15:50:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:51.541 15:50:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.541 15:50:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:51.541 15:50:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.541 15:50:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:51.541 15:50:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.541 15:50:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:51.541 15:50:02 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:51.541 15:50:02 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:51.541 15:50:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.541 15:50:02 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:51.541 15:50:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:51.541 15:50:02 -- bdev/nbd_common.sh@51 -- # local i 00:07:51.541 15:50:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.541 15:50:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:51.799 15:50:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:51.799 15:50:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:51.799 15:50:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:51.799 15:50:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.799 15:50:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.799 15:50:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:51.799 15:50:03 -- bdev/nbd_common.sh@41 -- # break 00:07:51.799 15:50:03 -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.799 15:50:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.799 15:50:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:52.056 15:50:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:52.057 15:50:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:52.057 15:50:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:52.057 15:50:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.057 15:50:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.057 15:50:03 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:52.057 15:50:03 -- bdev/nbd_common.sh@41 -- # break 00:07:52.057 15:50:03 -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.057 15:50:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.057 15:50:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:52.057 15:50:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:52.057 15:50:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:52.057 15:50:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:52.057 15:50:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.057 15:50:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.057 15:50:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:52.057 15:50:03 -- bdev/nbd_common.sh@41 -- # break 00:07:52.057 15:50:03 -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.057 15:50:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.057 15:50:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:52.371 15:50:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:52.371 15:50:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:52.371 15:50:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:52.371 15:50:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.371 15:50:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.371 15:50:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:52.371 15:50:03 -- bdev/nbd_common.sh@41 -- # break 00:07:52.371 15:50:03 -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.371 15:50:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.371 15:50:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:52.629 15:50:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:52.629 15:50:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:52.629 15:50:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:52.629 15:50:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.629 15:50:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.629 15:50:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:52.629 15:50:03 -- bdev/nbd_common.sh@41 -- # break 00:07:52.629 15:50:03 -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.629 15:50:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.629 15:50:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:52.629 15:50:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:52.629 15:50:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:52.629 15:50:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:52.629 15:50:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.629 15:50:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.629 15:50:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:52.629 15:50:04 -- bdev/nbd_common.sh@41 -- # break 00:07:52.629 15:50:04 -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.629 15:50:04 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:52.629 15:50:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.629 15:50:04 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:52.888 15:50:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:52.888 15:50:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:52.888 15:50:04 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:52.888 15:50:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:52.888 15:50:04 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:52.888 15:50:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:52.888 15:50:04 -- bdev/nbd_common.sh@65 -- # true 00:07:52.888 15:50:04 -- bdev/nbd_common.sh@65 -- # count=0 00:07:52.888 15:50:04 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:52.888 15:50:04 -- bdev/nbd_common.sh@104 -- # count=0 00:07:52.888 15:50:04 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:52.888 15:50:04 -- bdev/nbd_common.sh@109 -- # return 0 00:07:52.888 15:50:04 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:52.888 15:50:04 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.888 15:50:04 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:52.888 15:50:04 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:07:52.888 15:50:04 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:07:52.888 15:50:04 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:53.146 malloc_lvol_verify 00:07:53.146 15:50:04 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:53.406 bb0e8e82-69b9-4053-b010-178669f59746 00:07:53.406 15:50:04 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:53.667 faf95663-471a-44bb-bcf2-bd09bab35290 00:07:53.667 15:50:04 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:53.667 /dev/nbd0 00:07:53.667 15:50:05 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:07:53.667 mke2fs 1.47.0 (5-Feb-2023) 00:07:53.667 Discarding device blocks: 0/4096 done 00:07:53.667 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:53.667 00:07:53.667 Allocating group tables: 0/1 done 00:07:53.667 Writing inode tables: 0/1 done 00:07:53.926 Creating journal (1024 blocks): done 00:07:53.926 Writing superblocks and filesystem accounting information: 0/1 done 00:07:53.926 00:07:53.926 15:50:05 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:07:53.926 15:50:05 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:53.926 15:50:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.926 15:50:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:53.926 15:50:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:53.926 15:50:05 -- bdev/nbd_common.sh@51 -- # local i 00:07:53.926 15:50:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:53.926 15:50:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:53.926 15:50:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:53.926 15:50:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:53.926 15:50:05 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:07:53.926 15:50:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:53.926 15:50:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:53.926 15:50:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:53.926 15:50:05 -- bdev/nbd_common.sh@41 -- # break 00:07:53.926 15:50:05 -- bdev/nbd_common.sh@45 -- # return 0 00:07:53.926 15:50:05 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:07:53.926 15:50:05 -- bdev/nbd_common.sh@147 -- # return 0 00:07:53.926 15:50:05 -- bdev/blockdev.sh@324 -- # killprocess 60306 00:07:53.926 15:50:05 -- common/autotest_common.sh@936 -- # '[' -z 60306 ']' 00:07:53.926 15:50:05 -- common/autotest_common.sh@940 -- # kill -0 60306 00:07:53.926 15:50:05 -- common/autotest_common.sh@941 -- # uname 00:07:53.926 15:50:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:53.926 15:50:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60306 00:07:53.926 killing process with pid 60306 00:07:53.926 15:50:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:53.926 15:50:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:53.926 15:50:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60306' 00:07:53.926 15:50:05 -- common/autotest_common.sh@955 -- # kill 60306 00:07:53.926 15:50:05 -- common/autotest_common.sh@960 -- # wait 60306 00:07:54.865 15:50:06 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:07:54.865 00:07:54.865 real 0m10.339s 00:07:54.865 user 0m14.386s 00:07:54.865 sys 0m3.025s 00:07:54.865 15:50:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:54.865 ************************************ 00:07:54.865 END TEST bdev_nbd 00:07:54.865 ************************************ 00:07:54.866 15:50:06 -- common/autotest_common.sh@10 -- # set +x 00:07:54.866 15:50:06 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:07:54.866 15:50:06 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:07:54.866 skipping fio tests on NVMe due to multi-ns failures. 00:07:54.866 15:50:06 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:54.866 15:50:06 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:54.866 15:50:06 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:54.866 15:50:06 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:07:54.866 15:50:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.866 15:50:06 -- common/autotest_common.sh@10 -- # set +x 00:07:54.866 ************************************ 00:07:54.866 START TEST bdev_verify 00:07:54.866 ************************************ 00:07:54.866 15:50:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:55.126 [2024-11-29 15:50:06.341949] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
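The bdev_verify stage whose startup banner appears above is a plain bdevperf run against the generated bdev.json. The invocation, re-stated with the flags from the run_test line just traced (-q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds, -m reactor core mask; -C is copied verbatim from the log):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3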
00:07:55.126 [2024-11-29 15:50:06.342075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60686 ]
00:07:55.126 [2024-11-29 15:50:06.488480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:55.386 [2024-11-29 15:50:06.677585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:55.386 [2024-11-29 15:50:06.677698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:55.958 Running I/O for 5 seconds...
00:08:01.234
00:08:01.234 Latency(us)
00:08:01.234 [2024-11-29T15:50:12.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:01.234 [2024-11-29T15:50:12.665Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:01.234 Verification LBA range: start 0x0 length 0xbd0bd
00:08:01.234 Nvme0n1 : 5.04 2858.34 11.17 0.00 0.00 44634.06 9275.86 63317.86
00:08:01.234 [2024-11-29T15:50:12.665Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:01.234 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:08:01.234 Nvme0n1 : 5.05 2873.04 11.22 0.00 0.00 44414.77 10032.05 64527.75
00:08:01.234 [2024-11-29T15:50:12.665Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:01.234 Verification LBA range: start 0x0 length 0xa0000
00:08:01.234 Nvme1n1 : 5.04 2857.57 11.16 0.00 0.00 44618.06 9679.16 62914.56
00:08:01.234 [2024-11-29T15:50:12.665Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:01.234 Verification LBA range: start 0xa0000 length 0xa0000
00:08:01.234 Nvme1n1 : 5.05 2872.31 11.22 0.00 0.00 44321.63 10132.87 54041.99
00:08:01.234 [2024-11-29T15:50:12.665Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:01.234 Verification LBA range: start 0x0 length 0x80000
00:08:01.234 Nvme2n1 : 5.05 2862.65 11.18 0.00 0.00 44443.97 4385.87 56461.78
00:08:01.234 [2024-11-29T15:50:12.665Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:01.234 Verification LBA range: start 0x80000 length 0x80000
00:08:01.234 Nvme2n1 : 5.06 2874.76 11.23 0.00 0.00 44226.62 4461.49 52428.80
00:08:01.234 [2024-11-29T15:50:12.665Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:01.234 Verification LBA range: start 0x0 length 0x80000
00:08:01.234 Nvme2n2 : 5.05 2860.74 11.17 0.00 0.00 44411.44 6755.25 57671.68
00:08:01.234 [2024-11-29T15:50:12.665Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:01.234 Verification LBA range: start 0x80000 length 0x80000
00:08:01.234 Nvme2n2 : 5.07 2871.51 11.22 0.00 0.00 44213.02 9023.80 51622.20
00:08:01.234 [2024-11-29T15:50:12.665Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:01.234 Verification LBA range: start 0x0 length 0x80000
00:08:01.234 Nvme2n3 : 5.06 2866.42 11.20 0.00 0.00 44306.68 3654.89 56058.49
00:08:01.234 [2024-11-29T15:50:12.665Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:01.234 Verification LBA range: start 0x80000 length 0x80000
00:08:01.234 Nvme2n3 : 5.07 2874.86 11.23 0.00 0.00 44118.11 2634.04 52428.80
00:08:01.234 [2024-11-29T15:50:12.665Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:01.234 Verification LBA range: start 0x0 length 0x20000
00:08:01.234 Nvme3n1 : 5.07 2863.36 11.18 0.00 0.00 44297.89 7763.50 56865.08
00:08:01.234 [2024-11-29T15:50:12.665Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:01.234 Verification LBA range: start 0x20000 length 0x20000
00:08:01.234 Nvme3n1 : 5.08 2874.12 11.23 0.00 0.00 44083.53 3163.37 52025.50
00:08:01.234 [2024-11-29T15:50:12.665Z] ===================================================================================================================
00:08:01.234 [2024-11-29T15:50:12.665Z] Total : 34409.66 134.41 0.00 0.00 44340.18 2634.04 64527.75
00:08:27.852
00:08:27.852 real 0m30.750s user 1m0.067s sys 0m0.385s
00:08:27.852 15:50:37 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:27.852 ************************************
00:08:27.852 END TEST bdev_verify
00:08:27.852 ************************************
00:08:27.852 15:50:37 -- common/autotest_common.sh@10 -- # set +x
00:08:27.852 15:50:37 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:27.852 15:50:37 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:08:27.852 15:50:37 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:27.852 15:50:37 -- common/autotest_common.sh@10 -- # set +x
00:08:27.852 ************************************
00:08:27.852 START TEST bdev_verify_big_io
00:08:27.852 ************************************
00:08:27.852 15:50:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:27.852 [2024-11-29 15:50:37.160416] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:27.852 [2024-11-29 15:50:37.160530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60892 ]
00:08:27.852 [2024-11-29 15:50:37.307522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:27.852 [2024-11-29 15:50:37.486147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:27.852 [2024-11-29 15:50:37.486221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:27.852 Running I/O for 5 seconds...
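A quick consistency check on the table above: the MiB/s column is IOPS multiplied by the I/O size. The Total rows of this 4 KiB verify pass and of the 64 KiB big-I/O pass that follows both line up:

awk 'BEGIN { printf "%.2f\n", 34409.66 * 4096 / 1048576 }'   # 134.41 MiB/s, Total row above
awk 'BEGIN { printf "%.2f\n", 3673.59 * 65536 / 1048576 }'   # 229.60 MiB/s, Total row below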
00:08:33.107
00:08:33.108 Latency(us)
00:08:33.108 [2024-11-29T15:50:44.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:33.108 [2024-11-29T15:50:44.539Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:33.108 Verification LBA range: start 0x0 length 0xbd0b
00:08:33.108 Nvme0n1 : 5.30 295.47 18.47 0.00 0.00 421705.16 81466.29 851766.35
00:08:33.108 [2024-11-29T15:50:44.539Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:33.108 Verification LBA range: start 0xbd0b length 0xbd0b
00:08:33.108 Nvme0n1 : 5.30 278.75 17.42 0.00 0.00 447620.50 67350.84 809823.31
00:08:33.108 [2024-11-29T15:50:44.539Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:33.108 Verification LBA range: start 0x0 length 0xa000
00:08:33.108 Nvme1n1 : 5.37 299.80 18.74 0.00 0.00 408463.69 64931.05 771106.66
00:08:33.108 [2024-11-29T15:50:44.539Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:33.108 Verification LBA range: start 0xa000 length 0xa000
00:08:33.108 Nvme1n1 : 5.36 284.49 17.78 0.00 0.00 433323.69 52025.50 738842.78
00:08:33.108 [2024-11-29T15:50:44.539Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:33.108 Verification LBA range: start 0x0 length 0x8000
00:08:33.108 Nvme2n1 : 5.41 306.50 19.16 0.00 0.00 394261.74 40329.85 693673.35
00:08:33.108 [2024-11-29T15:50:44.539Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:33.108 Verification LBA range: start 0x8000 length 0x8000
00:08:33.108 Nvme2n1 : 5.40 290.20 18.14 0.00 0.00 418422.73 40733.14 671088.64
00:08:33.108 [2024-11-29T15:50:44.539Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:33.108 Verification LBA range: start 0x0 length 0x8000
00:08:33.108 Nvme2n2 : 5.44 313.52 19.59 0.00 0.00 379786.01 29844.09 613013.66
00:08:33.108 [2024-11-29T15:50:44.539Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:33.108 Verification LBA range: start 0x8000 length 0x8000
00:08:33.108 Nvme2n2 : 5.42 296.87 18.55 0.00 0.00 403149.23 21778.12 603334.50
00:08:33.108 [2024-11-29T15:50:44.539Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:33.108 Verification LBA range: start 0x0 length 0x8000
00:08:33.108 Nvme2n3 : 5.50 325.00 20.31 0.00 0.00 360112.75 25609.45 535580.36
00:08:33.108 [2024-11-29T15:50:44.539Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:33.108 Verification LBA range: start 0x8000 length 0x8000
00:08:33.108 Nvme2n3 : 5.45 304.62 19.04 0.00 0.00 386736.21 20971.52 532353.97
00:08:33.108 [2024-11-29T15:50:44.539Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:33.108 Verification LBA range: start 0x0 length 0x2000
00:08:33.108 Nvme3n1 : 5.52 346.09 21.63 0.00 0.00 333296.35 3226.39 458147.05
00:08:33.108 [2024-11-29T15:50:44.539Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:33.108 Verification LBA range: start 0x2000 length 0x2000
00:08:33.108 Nvme3n1 : 5.50 332.28 20.77 0.00 0.00 350007.66 431.66 464599.83
00:08:33.108 [2024-11-29T15:50:44.539Z] ===================================================================================================================
00:08:33.108 [2024-11-29T15:50:44.539Z] Total : 3673.59 229.60 0.00 0.00 392301.25 431.66 851766.35
00:08:34.482
00:08:34.482 real 0m8.501s user 0m15.980s sys 0m0.236s
15:50:45 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:34.482 ************************************
00:08:34.482 END TEST bdev_verify_big_io
00:08:34.482 ************************************
00:08:34.482 15:50:45 -- common/autotest_common.sh@10 -- # set +x
00:08:34.482 15:50:45 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:34.482 15:50:45 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:08:34.482 15:50:45 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:34.482 15:50:45 -- common/autotest_common.sh@10 -- # set +x
00:08:34.482 ************************************
00:08:34.482 START TEST bdev_write_zeroes
00:08:34.482 ************************************
00:08:34.482 15:50:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:34.482 [2024-11-29 15:50:45.717322] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:34.482 [2024-11-29 15:50:45.717424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61013 ]
00:08:34.482 [2024-11-29 15:50:45.867142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:34.740 [2024-11-29 15:50:46.013893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:35.307 Running I/O for 1 seconds...
00:08:36.241
00:08:36.241 Latency(us)
00:08:36.241 [2024-11-29T15:50:47.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:36.241 [2024-11-29T15:50:47.672Z] Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:36.241 Nvme0n1 : 1.01 12044.91 47.05 0.00 0.00 10596.55 4738.76 26012.75
00:08:36.241 [2024-11-29T15:50:47.672Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:36.241 Nvme1n1 : 1.01 12031.16 47.00 0.00 0.00 10595.02 7158.55 18955.03
00:08:36.241 [2024-11-29T15:50:47.672Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:36.242 Nvme2n1 : 1.01 12046.77 47.06 0.00 0.00 10540.77 7057.72 18350.08
00:08:36.242 [2024-11-29T15:50:47.673Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:36.242 Nvme2n2 : 1.02 12075.52 47.17 0.00 0.00 10500.88 4940.41 18350.08
00:08:36.242 [2024-11-29T15:50:47.673Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:36.242 Nvme2n3 : 1.02 12061.95 47.12 0.00 0.00 10497.51 5192.47 19257.50
00:08:36.242 [2024-11-29T15:50:47.673Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:36.242 Nvme3n1 : 1.02 12048.42 47.06 0.00 0.00 10489.18 5620.97 19862.45
00:08:36.242 [2024-11-29T15:50:47.673Z] ===================================================================================================================
00:08:36.242 [2024-11-29T15:50:47.673Z] Total : 72308.72 282.46 0.00 0.00 10536.44 4738.76 26012.75
00:08:37.187
00:08:37.187 real 0m2.679s user 0m2.383s sys 0m0.182s
15:50:48 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:37.187 ************************************ 00:08:37.187 END TEST bdev_write_zeroes 00:08:37.187 ************************************ 00:08:37.187 15:50:48 -- common/autotest_common.sh@10 -- # set +x 00:08:37.187 15:50:48 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:37.187 15:50:48 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:37.187 15:50:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:37.187 15:50:48 -- common/autotest_common.sh@10 -- # set +x 00:08:37.187 ************************************ 00:08:37.187 START TEST bdev_json_nonenclosed 00:08:37.187 ************************************ 00:08:37.187 15:50:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:37.187 [2024-11-29 15:50:48.438667] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:37.187 [2024-11-29 15:50:48.438768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61055 ] 00:08:37.187 [2024-11-29 15:50:48.589403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.448 [2024-11-29 15:50:48.764860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.448 [2024-11-29 15:50:48.765011] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:37.448 [2024-11-29 15:50:48.765028] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:37.710 00:08:37.710 real 0m0.660s 00:08:37.710 user 0m0.456s 00:08:37.710 sys 0m0.099s 00:08:37.710 15:50:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:37.710 15:50:49 -- common/autotest_common.sh@10 -- # set +x 00:08:37.710 ************************************ 00:08:37.710 END TEST bdev_json_nonenclosed 00:08:37.710 ************************************ 00:08:37.710 15:50:49 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:37.710 15:50:49 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:37.710 15:50:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:37.710 15:50:49 -- common/autotest_common.sh@10 -- # set +x 00:08:37.710 ************************************ 00:08:37.710 START TEST bdev_json_nonarray 00:08:37.710 ************************************ 00:08:37.710 15:50:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:37.971 [2024-11-29 15:50:49.161559] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:37.971 [2024-11-29 15:50:49.161695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61086 ] 00:08:37.971 [2024-11-29 15:50:49.314207] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.232 [2024-11-29 15:50:49.540786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.233 [2024-11-29 15:50:49.541007] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:08:38.233 [2024-11-29 15:50:49.541030] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:38.493 ************************************ 00:08:38.493 END TEST bdev_json_nonarray 00:08:38.493 ************************************ 00:08:38.493 00:08:38.493 real 0m0.751s 00:08:38.493 user 0m0.529s 00:08:38.493 sys 0m0.115s 00:08:38.493 15:50:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:38.493 15:50:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.493 15:50:49 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:08:38.493 15:50:49 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:08:38.493 15:50:49 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:08:38.493 15:50:49 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:08:38.493 15:50:49 -- bdev/blockdev.sh@809 -- # cleanup 00:08:38.494 15:50:49 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:38.494 15:50:49 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:38.494 15:50:49 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:08:38.494 15:50:49 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:08:38.494 15:50:49 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:08:38.494 15:50:49 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:08:38.494 00:08:38.494 real 1m0.505s 00:08:38.494 user 1m43.247s 00:08:38.494 sys 0m5.134s 00:08:38.494 15:50:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:38.494 15:50:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.494 ************************************ 00:08:38.494 END TEST blockdev_nvme 00:08:38.494 ************************************ 00:08:38.755 15:50:49 -- spdk/autotest.sh@206 -- # uname -s 00:08:38.755 15:50:49 -- spdk/autotest.sh@206 -- # [[ Linux == Linux ]] 00:08:38.755 15:50:49 -- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:38.756 15:50:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:38.756 15:50:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:38.756 15:50:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.756 ************************************ 00:08:38.756 START TEST blockdev_nvme_gpt 00:08:38.756 ************************************ 00:08:38.756 15:50:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:38.756 * Looking for test storage... 
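Both JSON negative tests above exercise the same parser, which requires a top-level object whose "subsystems" key is an array. A sketch of the minimal accepted shape, with the controller entry modeled on the gen_nvme.sh output further down; the /tmp path is illustrative:

cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:06.0" } }
      ]
    }
  ]
}
EOF
# nonenclosed.json drops the outer braces (json_config.c:595 error above);
# nonarray.json makes "subsystems" a non-array (json_config.c:601 error above).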
00:08:38.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:38.756 15:50:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:38.756 15:50:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:38.756 15:50:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:38.756 15:50:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:38.756 15:50:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:38.756 15:50:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:38.756 15:50:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:38.756 15:50:50 -- scripts/common.sh@335 -- # IFS=.-: 00:08:38.756 15:50:50 -- scripts/common.sh@335 -- # read -ra ver1 00:08:38.756 15:50:50 -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.756 15:50:50 -- scripts/common.sh@336 -- # read -ra ver2 00:08:38.756 15:50:50 -- scripts/common.sh@337 -- # local 'op=<' 00:08:38.756 15:50:50 -- scripts/common.sh@339 -- # ver1_l=2 00:08:38.756 15:50:50 -- scripts/common.sh@340 -- # ver2_l=1 00:08:38.756 15:50:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:38.756 15:50:50 -- scripts/common.sh@343 -- # case "$op" in 00:08:38.756 15:50:50 -- scripts/common.sh@344 -- # : 1 00:08:38.756 15:50:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:38.756 15:50:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:38.756 15:50:50 -- scripts/common.sh@364 -- # decimal 1 00:08:38.756 15:50:50 -- scripts/common.sh@352 -- # local d=1 00:08:38.756 15:50:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.756 15:50:50 -- scripts/common.sh@354 -- # echo 1 00:08:38.756 15:50:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:38.756 15:50:50 -- scripts/common.sh@365 -- # decimal 2 00:08:38.756 15:50:50 -- scripts/common.sh@352 -- # local d=2 00:08:38.756 15:50:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.756 15:50:50 -- scripts/common.sh@354 -- # echo 2 00:08:38.756 15:50:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:38.756 15:50:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:38.756 15:50:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:38.756 15:50:50 -- scripts/common.sh@367 -- # return 0 00:08:38.756 15:50:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.756 15:50:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:38.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.756 --rc genhtml_branch_coverage=1 00:08:38.756 --rc genhtml_function_coverage=1 00:08:38.756 --rc genhtml_legend=1 00:08:38.756 --rc geninfo_all_blocks=1 00:08:38.756 --rc geninfo_unexecuted_blocks=1 00:08:38.756 00:08:38.756 ' 00:08:38.756 15:50:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:38.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.756 --rc genhtml_branch_coverage=1 00:08:38.756 --rc genhtml_function_coverage=1 00:08:38.756 --rc genhtml_legend=1 00:08:38.756 --rc geninfo_all_blocks=1 00:08:38.756 --rc geninfo_unexecuted_blocks=1 00:08:38.756 00:08:38.756 ' 00:08:38.756 15:50:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:38.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.756 --rc genhtml_branch_coverage=1 00:08:38.756 --rc genhtml_function_coverage=1 00:08:38.756 --rc genhtml_legend=1 00:08:38.756 --rc geninfo_all_blocks=1 00:08:38.756 --rc geninfo_unexecuted_blocks=1 00:08:38.756 00:08:38.756 ' 00:08:38.756 15:50:50 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:38.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.756 --rc genhtml_branch_coverage=1 00:08:38.756 --rc genhtml_function_coverage=1 00:08:38.756 --rc genhtml_legend=1 00:08:38.756 --rc geninfo_all_blocks=1 00:08:38.756 --rc geninfo_unexecuted_blocks=1 00:08:38.756 00:08:38.756 ' 00:08:38.756 15:50:50 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:38.756 15:50:50 -- bdev/nbd_common.sh@6 -- # set -e 00:08:38.756 15:50:50 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:38.756 15:50:50 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:38.756 15:50:50 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:38.756 15:50:50 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:38.756 15:50:50 -- bdev/blockdev.sh@18 -- # : 00:08:38.756 15:50:50 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:08:38.756 15:50:50 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:08:38.756 15:50:50 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:08:38.756 15:50:50 -- bdev/blockdev.sh@672 -- # uname -s 00:08:38.756 15:50:50 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:08:38.756 15:50:50 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:08:38.756 15:50:50 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:08:38.756 15:50:50 -- bdev/blockdev.sh@681 -- # crypto_device= 00:08:38.756 15:50:50 -- bdev/blockdev.sh@682 -- # dek= 00:08:38.756 15:50:50 -- bdev/blockdev.sh@683 -- # env_ctx= 00:08:38.756 15:50:50 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:08:38.756 15:50:50 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:08:38.756 15:50:50 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:08:38.756 15:50:50 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:08:38.756 15:50:50 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:08:38.756 15:50:50 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=61169 00:08:38.756 15:50:50 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:38.756 15:50:50 -- bdev/blockdev.sh@47 -- # waitforlisten 61169 00:08:38.756 15:50:50 -- common/autotest_common.sh@829 -- # '[' -z 61169 ']' 00:08:38.756 15:50:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.756 15:50:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:38.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.756 15:50:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.756 15:50:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:38.756 15:50:50 -- common/autotest_common.sh@10 -- # set +x 00:08:38.756 15:50:50 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:39.018 [2024-11-29 15:50:50.226958] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
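The scripts/common.sh walk at the start of this test is a pure-bash version comparison: both strings are split on '.', '-' or ':' and compared field by field, with missing fields treated as zero. A condensed sketch of that logic (the real helper also validates each field with decimal(), omitted here):

lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v max
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < max; v++)); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
lt 1.15 2 && echo older   # the same 'lt 1.15 2' comparison traced above succeeds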
00:08:39.018 [2024-11-29 15:50:50.227112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61169 ] 00:08:39.018 [2024-11-29 15:50:50.376119] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.280 [2024-11-29 15:50:50.602415] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:39.280 [2024-11-29 15:50:50.602635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.668 15:50:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:40.668 15:50:51 -- common/autotest_common.sh@862 -- # return 0 00:08:40.668 15:50:51 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:08:40.668 15:50:51 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:08:40.668 15:50:51 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:40.929 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:40.929 Waiting for block devices as requested 00:08:40.929 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:08:41.190 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:08:41.190 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:08:41.190 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:08:46.464 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:08:46.464 15:50:57 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:08:46.464 15:50:57 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:08:46.464 15:50:57 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:08:46.464 15:50:57 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:08:46.464 15:50:57 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:46.464 15:50:57 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:08:46.464 15:50:57 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:08:46.464 15:50:57 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:08:46.464 15:50:57 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:46.464 15:50:57 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:46.464 15:50:57 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:08:46.464 15:50:57 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:08:46.464 15:50:57 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:46.464 15:50:57 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:46.464 15:50:57 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:46.464 15:50:57 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:08:46.464 15:50:57 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:08:46.464 15:50:57 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:46.464 15:50:57 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:46.464 15:50:57 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:46.464 15:50:57 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:08:46.465 15:50:57 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:08:46.465 15:50:57 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:46.465 15:50:57 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:46.465 15:50:57 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:46.465 15:50:57 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:08:46.465 15:50:57 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:08:46.465 15:50:57 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:46.465 15:50:57 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:46.465 15:50:57 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:46.465 15:50:57 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:08:46.465 15:50:57 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:08:46.465 15:50:57 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:46.465 15:50:57 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:46.465 15:50:57 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:46.465 15:50:57 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:08:46.465 15:50:57 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:08:46.465 15:50:57 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:46.465 15:50:57 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:46.465 15:50:57 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:07.0/nvme/nvme3/nvme3n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n2' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n3' '/sys/bus/pci/drivers/nvme/0000:00:09.0/nvme/nvme0/nvme0c0n1') 00:08:46.465 15:50:57 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:08:46.465 15:50:57 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:08:46.465 15:50:57 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:46.465 15:50:57 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:08:46.465 15:50:57 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme2n1 00:08:46.465 15:50:57 -- bdev/blockdev.sh@111 -- # parted /dev/nvme2n1 -ms print 00:08:46.465 15:50:57 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme2n1: unrecognised disk label 00:08:46.465 BYT; 00:08:46.465 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:46.465 15:50:57 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme2n1: unrecognised disk label 00:08:46.465 BYT; 00:08:46.465 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\2\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:46.465 15:50:57 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme2n1 00:08:46.465 15:50:57 -- bdev/blockdev.sh@114 -- # break 00:08:46.465 15:50:57 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme2n1 ]] 00:08:46.465 15:50:57 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:46.465 15:50:57 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:46.465 15:50:57 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme2n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:46.465 15:50:57 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:08:46.465 15:50:57 -- scripts/common.sh@410 -- # local spdk_guid 00:08:46.465 15:50:57 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:46.465 15:50:57 -- 
scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:46.465 15:50:57 -- scripts/common.sh@415 -- # IFS='()' 00:08:46.465 15:50:57 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:08:46.465 15:50:57 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:46.465 15:50:57 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:46.465 15:50:57 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:46.465 15:50:57 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:46.465 15:50:57 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:46.465 15:50:57 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:08:46.465 15:50:57 -- scripts/common.sh@422 -- # local spdk_guid 00:08:46.465 15:50:57 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:46.465 15:50:57 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:46.465 15:50:57 -- scripts/common.sh@427 -- # IFS='()' 00:08:46.465 15:50:57 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:08:46.465 15:50:57 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:46.465 15:50:57 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:46.465 15:50:57 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:46.465 15:50:57 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:46.465 15:50:57 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:46.465 15:50:57 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1 00:08:47.399 The operation has completed successfully. 00:08:47.399 15:50:58 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1 00:08:48.332 The operation has completed successfully. 
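The trace above is setup_gpt_conf doing its prep work: it screens every /sys/block/nvme* node for zoned queues (zoned namespaces are excluded from the GPT tests), takes the first namespace that parted reports with an unrecognised disk label (/dev/nvme2n1 here), writes a fresh GPT with two half-size partitions, and retypes them with the SPDK partition-type GUIDs scraped out of module/bdev/gpt/gpt.h. A condensed sketch of those steps, reusing the device and repo paths from this run (the exact string munging in scripts/common.sh may differ slightly):

    # Sketch of the GPT prep traced above; device/paths are from this run.
    dev=/dev/nvme2n1
    gpt_h=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h

    # Collect zoned namespaces so they can be skipped.
    shopt -s nullglob
    zoned=()
    for nvme in /sys/block/nvme*; do
        if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
            zoned+=("${nvme##*/}")
        fi
    done

    # Pull the GUID out of the SPDK_GPT_PART_TYPE_GUID(...) macro and
    # normalise "0x6527994e, 0x2c5a, ..." into "6527994e-2c5a-...".
    IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$gpt_h")
    spdk_guid=${spdk_guid//, /-}
    spdk_guid=${spdk_guid//0x/}

    # Fresh label, two halves, then retype/re-GUID partition 1 with sgdisk
    # (partition 2 gets the legacy SPDK_GPT_PART_TYPE_GUID_OLD the same way).
    parted -s "$dev" mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    sgdisk -t 1:"$spdk_guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"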
00:08:48.332 15:50:59 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:49.267 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:49.267 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:08:49.267 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:08:49.267 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:49.526 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:08:49.526 15:51:00 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:08:49.526 15:51:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.526 15:51:00 -- common/autotest_common.sh@10 -- # set +x 00:08:49.526 [] 00:08:49.526 15:51:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.526 15:51:00 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:08:49.526 15:51:00 -- bdev/blockdev.sh@79 -- # local json 00:08:49.526 15:51:00 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:08:49.526 15:51:00 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:49.526 15:51:00 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:08:49.526 15:51:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.526 15:51:00 -- common/autotest_common.sh@10 -- # set +x 00:08:49.785 15:51:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.785 15:51:01 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:08:49.785 15:51:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.785 15:51:01 -- common/autotest_common.sh@10 -- # set +x 00:08:49.785 15:51:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.785 15:51:01 -- bdev/blockdev.sh@738 -- # cat 00:08:49.785 15:51:01 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:08:49.785 15:51:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.785 15:51:01 -- common/autotest_common.sh@10 -- # set +x 00:08:49.785 15:51:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.785 15:51:01 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:08:49.785 15:51:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.785 15:51:01 -- common/autotest_common.sh@10 -- # set +x 00:08:49.785 15:51:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.785 15:51:01 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:49.785 15:51:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.785 15:51:01 -- common/autotest_common.sh@10 -- # set +x 00:08:49.785 15:51:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.785 15:51:01 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:08:49.785 15:51:01 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:08:49.785 15:51:01 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:08:49.785 15:51:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.785 15:51:01 -- common/autotest_common.sh@10 -- # set +x 00:08:49.785 15:51:01 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.785 15:51:01 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:08:49.785 15:51:01 -- bdev/blockdev.sh@747 -- # jq -r .name 00:08:49.786 15:51:01 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "14395279-7761-4e92-a9f2-8329aa4bff96"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "14395279-7761-4e92-a9f2-8329aa4bff96",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' 
"name": "Nvme2n1",' ' "aliases": [' ' "a01beb9e-db7b-4886-af3b-be38833beac2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a01beb9e-db7b-4886-af3b-be38833beac2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "af32f2cf-904b-4a8b-a861-5426f29662c6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "af32f2cf-904b-4a8b-a861-5426f29662c6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "110cb87a-2f9e-4c8f-b287-e4f36c58cdfe"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "110cb87a-2f9e-4c8f-b287-e4f36c58cdfe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' 
' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "929187de-05a0-4554-b716-5cdbee115d6c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "929187de-05a0-4554-b716-5cdbee115d6c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:50.044 15:51:01 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:08:50.044 15:51:01 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:08:50.044 15:51:01 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:08:50.044 15:51:01 -- bdev/blockdev.sh@752 -- # killprocess 61169 00:08:50.044 15:51:01 -- common/autotest_common.sh@936 -- # '[' -z 61169 ']' 00:08:50.044 15:51:01 -- common/autotest_common.sh@940 -- # kill -0 61169 00:08:50.044 15:51:01 -- common/autotest_common.sh@941 -- # uname 00:08:50.044 15:51:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:50.044 15:51:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61169 00:08:50.044 killing process with pid 61169 00:08:50.044 15:51:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:50.044 15:51:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:50.044 15:51:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61169' 00:08:50.044 15:51:01 -- common/autotest_common.sh@955 -- # kill 61169 00:08:50.044 15:51:01 -- common/autotest_common.sh@960 -- # wait 61169 00:08:51.026 15:51:02 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:51.026 15:51:02 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:51.026 15:51:02 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:51.026 15:51:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:51.026 15:51:02 -- common/autotest_common.sh@10 -- # set +x 00:08:51.026 ************************************ 00:08:51.026 START TEST bdev_hello_world 00:08:51.026 ************************************ 00:08:51.026 15:51:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev 
--json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:51.310 [2024-11-29 15:51:02.483832] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:51.310 [2024-11-29 15:51:02.483943] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61820 ] 00:08:51.310 [2024-11-29 15:51:02.630798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.568 [2024-11-29 15:51:02.772580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.829 [2024-11-29 15:51:03.250563] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:51.829 [2024-11-29 15:51:03.250633] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:08:51.829 [2024-11-29 15:51:03.250659] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:51.829 [2024-11-29 15:51:03.253817] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:51.829 [2024-11-29 15:51:03.254631] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:51.829 [2024-11-29 15:51:03.254673] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:51.829 [2024-11-29 15:51:03.255336] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:51.829 00:08:51.829 [2024-11-29 15:51:03.255372] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:53.216 00:08:53.216 real 0m1.841s 00:08:53.216 user 0m1.570s 00:08:53.216 sys 0m0.158s 00:08:53.216 15:51:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:53.216 15:51:04 -- common/autotest_common.sh@10 -- # set +x 00:08:53.216 ************************************ 00:08:53.216 END TEST bdev_hello_world 00:08:53.216 ************************************ 00:08:53.216 15:51:04 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:08:53.216 15:51:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:53.216 15:51:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:53.216 15:51:04 -- common/autotest_common.sh@10 -- # set +x 00:08:53.216 ************************************ 00:08:53.216 START TEST bdev_bounds 00:08:53.216 ************************************ 00:08:53.216 15:51:04 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:08:53.216 Process bdevio pid: 61858 00:08:53.216 15:51:04 -- bdev/blockdev.sh@288 -- # bdevio_pid=61858 00:08:53.216 15:51:04 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:53.216 15:51:04 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 61858' 00:08:53.216 15:51:04 -- bdev/blockdev.sh@291 -- # waitforlisten 61858 00:08:53.216 15:51:04 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:53.216 15:51:04 -- common/autotest_common.sh@829 -- # '[' -z 61858 ']' 00:08:53.216 15:51:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.216 15:51:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:53.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.217 15:51:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
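hello_bdev above and the bdevio run just launched both consume test/bdev/bdev.json, whose bdev_nvme_attach_controller entries were emitted by gen_nvme.sh and loaded through load_subsystem_config earlier in this trace. A minimal hand-written stand-in for that file, assuming the usual SPDK app-config layout with a top-level "subsystems" array and borrowing the 0000:00:06.0 controller from this run, would be:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:06.0" }
            }
          ]
        }
      ]
    }

With that saved as bdev.json, the invocation from this stage reduces to:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json bdev.json -b Nvme0n1p1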
00:08:53.217 15:51:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:53.217 15:51:04 -- common/autotest_common.sh@10 -- # set +x 00:08:53.217 [2024-11-29 15:51:04.398702] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:53.217 [2024-11-29 15:51:04.398825] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61858 ] 00:08:53.217 [2024-11-29 15:51:04.544847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:53.478 [2024-11-29 15:51:04.746682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.478 [2024-11-29 15:51:04.746907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.478 [2024-11-29 15:51:04.746916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.865 15:51:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:54.865 15:51:05 -- common/autotest_common.sh@862 -- # return 0 00:08:54.865 15:51:05 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:54.865 I/O targets: 00:08:54.865 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:08:54.865 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:08:54.865 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:54.865 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:54.865 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:54.865 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:54.865 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:54.865 00:08:54.865 00:08:54.865 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.865 http://cunit.sourceforge.net/ 00:08:54.865 00:08:54.865 00:08:54.865 Suite: bdevio tests on: Nvme3n1 00:08:54.865 Test: blockdev write read block ...passed 00:08:54.865 Test: blockdev write zeroes read block ...passed 00:08:54.865 Test: blockdev write zeroes read no split ...passed 00:08:54.865 Test: blockdev write zeroes read split ...passed 00:08:54.865 Test: blockdev write zeroes read split partial ...passed 00:08:54.865 Test: blockdev reset ...[2024-11-29 15:51:06.069833] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:08:54.865 [2024-11-29 15:51:06.073785] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
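These bdevio suites are not started by the binary on its own: bdevio was launched with -w, so it parks after app init, and tests.py perform_tests then kicks off the CUnit pass over the app's default RPC socket, which is why a waitforlisten on /var/tmp/spdk.sock sits between the two in the trace. Reduced to its essentials, with the paths from this run:

    spdk=/home/vagrant/spdk_repo/spdk
    # -w parks bdevio until an RPC arrives; flags copied from the trace above.
    "$spdk/test/bdev/bdevio/bdevio" -w -s 0 --json "$spdk/test/bdev/bdev.json" &
    bdevio_pid=$!
    sleep 2   # crude stand-in for the harness's waitforlisten on /var/tmp/spdk.sock
    "$spdk/test/bdev/bdevio/tests.py" perform_tests
    kill "$bdevio_pid"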
00:08:54.865 passed 00:08:54.865 Test: blockdev write read 8 blocks ...passed 00:08:54.865 Test: blockdev write read size > 128k ...passed 00:08:54.865 Test: blockdev write read invalid size ...passed 00:08:54.865 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:54.865 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:54.865 Test: blockdev write read max offset ...passed 00:08:54.865 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:54.865 Test: blockdev writev readv 8 blocks ...passed 00:08:54.865 Test: blockdev writev readv 30 x 1block ...passed 00:08:54.865 Test: blockdev writev readv block ...passed 00:08:54.865 Test: blockdev writev readv size > 128k ...passed 00:08:54.865 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:54.866 Test: blockdev comparev and writev ...[2024-11-29 15:51:06.093785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26ce0a000 len:0x1000 00:08:54.866 [2024-11-29 15:51:06.094199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:54.866 passed 00:08:54.866 Test: blockdev nvme passthru rw ...passed 00:08:54.866 Test: blockdev nvme passthru vendor specific ...[2024-11-29 15:51:06.097774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:08:54.866 Test: blockdev nvme admin passthru ...RP2 0x0 00:08:54.866 [2024-11-29 15:51:06.098149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:54.866 passed 00:08:54.866 Test: blockdev copy ...passed 00:08:54.866 Suite: bdevio tests on: Nvme2n3 00:08:54.866 Test: blockdev write read block ...passed 00:08:54.866 Test: blockdev write zeroes read block ...passed 00:08:54.866 Test: blockdev write zeroes read no split ...passed 00:08:54.866 Test: blockdev write zeroes read split ...passed 00:08:54.866 Test: blockdev write zeroes read split partial ...passed 00:08:54.866 Test: blockdev reset ...[2024-11-29 15:51:06.153284] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:54.866 [2024-11-29 15:51:06.157110] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:54.866 passed 00:08:54.866 Test: blockdev write read 8 blocks ...passed 00:08:54.866 Test: blockdev write read size > 128k ...passed 00:08:54.866 Test: blockdev write read invalid size ...passed 00:08:54.866 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:54.866 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:54.866 Test: blockdev write read max offset ...passed 00:08:54.866 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:54.866 Test: blockdev writev readv 8 blocks ...passed 00:08:54.866 Test: blockdev writev readv 30 x 1block ...passed 00:08:54.866 Test: blockdev writev readv block ...passed 00:08:54.866 Test: blockdev writev readv size > 128k ...passed 00:08:54.866 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:54.866 Test: blockdev comparev and writev ...[2024-11-29 15:51:06.176281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x267704000 len:0x1000 00:08:54.866 [2024-11-29 15:51:06.176321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:54.866 passed 00:08:54.866 Test: blockdev nvme passthru rw ...passed 00:08:54.866 Test: blockdev nvme passthru vendor specific ...passed 00:08:54.866 Test: blockdev nvme admin passthru ...[2024-11-29 15:51:06.178811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:54.866 [2024-11-29 15:51:06.178843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:54.866 passed 00:08:54.866 Test: blockdev copy ...passed 00:08:54.866 Suite: bdevio tests on: Nvme2n2 00:08:54.866 Test: blockdev write read block ...passed 00:08:54.866 Test: blockdev write zeroes read block ...passed 00:08:54.866 Test: blockdev write zeroes read no split ...passed 00:08:54.866 Test: blockdev write zeroes read split ...passed 00:08:54.866 Test: blockdev write zeroes read split partial ...passed 00:08:54.866 Test: blockdev reset ...[2024-11-29 15:51:06.235032] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:54.866 [2024-11-29 15:51:06.238758] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:54.866 passed 00:08:54.866 Test: blockdev write read 8 blocks ...passed 00:08:54.866 Test: blockdev write read size > 128k ...passed 00:08:54.866 Test: blockdev write read invalid size ...passed 00:08:54.866 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:54.866 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:54.866 Test: blockdev write read max offset ...passed 00:08:54.866 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:54.866 Test: blockdev writev readv 8 blocks ...passed 00:08:54.866 Test: blockdev writev readv 30 x 1block ...passed 00:08:54.866 Test: blockdev writev readv block ...passed 00:08:54.866 Test: blockdev writev readv size > 128k ...passed 00:08:54.866 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:54.866 Test: blockdev comparev and writev ...[2024-11-29 15:51:06.258066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x267704000 len:0x1000 00:08:54.866 [2024-11-29 15:51:06.258101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:54.866 passed 00:08:54.866 Test: blockdev nvme passthru rw ...passed 00:08:54.866 Test: blockdev nvme passthru vendor specific ...passed 00:08:54.866 Test: blockdev nvme admin passthru ...[2024-11-29 15:51:06.260565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:54.866 [2024-11-29 15:51:06.260595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:54.866 passed 00:08:54.866 Test: blockdev copy ...passed 00:08:54.866 Suite: bdevio tests on: Nvme2n1 00:08:54.866 Test: blockdev write read block ...passed 00:08:54.866 Test: blockdev write zeroes read block ...passed 00:08:54.866 Test: blockdev write zeroes read no split ...passed 00:08:55.128 Test: blockdev write zeroes read split ...passed 00:08:55.128 Test: blockdev write zeroes read split partial ...passed 00:08:55.128 Test: blockdev reset ...[2024-11-29 15:51:06.317021] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:55.128 [2024-11-29 15:51:06.320757] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:55.128 passed 00:08:55.128 Test: blockdev write read 8 blocks ...passed 00:08:55.128 Test: blockdev write read size > 128k ...passed 00:08:55.128 Test: blockdev write read invalid size ...passed 00:08:55.128 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:55.128 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:55.128 Test: blockdev write read max offset ...passed 00:08:55.128 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:55.128 Test: blockdev writev readv 8 blocks ...passed 00:08:55.128 Test: blockdev writev readv 30 x 1block ...passed 00:08:55.128 Test: blockdev writev readv block ...passed 00:08:55.128 Test: blockdev writev readv size > 128k ...passed 00:08:55.128 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:55.128 Test: blockdev comparev and writev ...[2024-11-29 15:51:06.337748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28803c000 len:0x1000 00:08:55.128 [2024-11-29 15:51:06.337783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:55.129 passed 00:08:55.129 Test: blockdev nvme passthru rw ...passed 00:08:55.129 Test: blockdev nvme passthru vendor specific ...passed 00:08:55.129 Test: blockdev nvme admin passthru ...[2024-11-29 15:51:06.340412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:55.129 [2024-11-29 15:51:06.340442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:55.129 passed 00:08:55.129 Test: blockdev copy ...passed 00:08:55.129 Suite: bdevio tests on: Nvme1n1 00:08:55.129 Test: blockdev write read block ...passed 00:08:55.129 Test: blockdev write zeroes read block ...passed 00:08:55.129 Test: blockdev write zeroes read no split ...passed 00:08:55.129 Test: blockdev write zeroes read split ...passed 00:08:55.129 Test: blockdev write zeroes read split partial ...passed 00:08:55.129 Test: blockdev reset ...[2024-11-29 15:51:06.397886] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:08:55.129 [2024-11-29 15:51:06.402182] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:55.129 passed 00:08:55.129 Test: blockdev write read 8 blocks ...passed 00:08:55.129 Test: blockdev write read size > 128k ...passed 00:08:55.129 Test: blockdev write read invalid size ...passed 00:08:55.129 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:55.129 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:55.129 Test: blockdev write read max offset ...passed 00:08:55.129 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:55.129 Test: blockdev writev readv 8 blocks ...passed 00:08:55.129 Test: blockdev writev readv 30 x 1block ...passed 00:08:55.129 Test: blockdev writev readv block ...passed 00:08:55.129 Test: blockdev writev readv size > 128k ...passed 00:08:55.129 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:55.129 Test: blockdev comparev and writev ...[2024-11-29 15:51:06.420538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x288038000 len:0x1000 00:08:55.129 [2024-11-29 15:51:06.420572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:55.129 passed 00:08:55.129 Test: blockdev nvme passthru rw ...passed 00:08:55.129 Test: blockdev nvme passthru vendor specific ...[2024-11-29 15:51:06.422460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:08:55.129 Test: blockdev nvme admin passthru ...RP2 0x0 00:08:55.129 [2024-11-29 15:51:06.422557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:55.129 passed 00:08:55.129 Test: blockdev copy ...passed 00:08:55.129 Suite: bdevio tests on: Nvme0n1p2 00:08:55.129 Test: blockdev write read block ...passed 00:08:55.129 Test: blockdev write zeroes read block ...passed 00:08:55.129 Test: blockdev write zeroes read no split ...passed 00:08:55.129 Test: blockdev write zeroes read split ...passed 00:08:55.129 Test: blockdev write zeroes read split partial ...passed 00:08:55.129 Test: blockdev reset ...[2024-11-29 15:51:06.482404] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:55.129 [2024-11-29 15:51:06.486046] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
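The Nvme0n1p2 suite above and the Nvme0n1p1 suite below exercise the two GPT partition bdevs from the earlier bdev_get_bdevs dump; their comparev_and_writev case is skipped because these bdevs carry separate metadata (md_size 64) that the test does not support yet. Partition bdevs like these can be picked out of bdev_get_bdevs with a jq filter in the same spirit as the harness's select(.claimed == false); a small query sketch, with field names as seen in this run's dump:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # List unclaimed GPT partition bdevs with their partition names and GUIDs.
    "$rpc" bdev_get_bdevs | jq -r '
        .[] | select(.product_name == "GPT Disk" and .claimed == false)
            | "\(.name)\t\(.driver_specific.gpt.partition_name)\t\(.driver_specific.gpt.unique_partition_guid)"'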
00:08:55.129 passed 00:08:55.129 Test: blockdev write read 8 blocks ...passed 00:08:55.129 Test: blockdev write read size > 128k ...passed 00:08:55.129 Test: blockdev write read invalid size ...passed 00:08:55.129 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:55.129 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:55.129 Test: blockdev write read max offset ...passed 00:08:55.129 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:55.129 Test: blockdev writev readv 8 blocks ...passed 00:08:55.129 Test: blockdev writev readv 30 x 1block ...passed 00:08:55.129 Test: blockdev writev readv block ...passed 00:08:55.129 Test: blockdev writev readv size > 128k ...passed 00:08:55.129 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:55.129 Test: blockdev comparev and writev ...passed 00:08:55.129 Test: blockdev nvme passthru rw ...passed 00:08:55.129 Test: blockdev nvme passthru vendor specific ...passed 00:08:55.129 Test: blockdev nvme admin passthru ...passed 00:08:55.129 Test: blockdev copy ...[2024-11-29 15:51:06.500636] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:08:55.129 separate metadata which is not supported yet. 00:08:55.129 passed 00:08:55.129 Suite: bdevio tests on: Nvme0n1p1 00:08:55.129 Test: blockdev write read block ...passed 00:08:55.129 Test: blockdev write zeroes read block ...passed 00:08:55.129 Test: blockdev write zeroes read no split ...passed 00:08:55.129 Test: blockdev write zeroes read split ...passed 00:08:55.129 Test: blockdev write zeroes read split partial ...passed 00:08:55.129 Test: blockdev reset ...[2024-11-29 15:51:06.550646] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:55.129 [2024-11-29 15:51:06.555556] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:55.129 passed 00:08:55.390 Test: blockdev write read 8 blocks ...passed 00:08:55.390 Test: blockdev write read size > 128k ...passed 00:08:55.390 Test: blockdev write read invalid size ...passed 00:08:55.390 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:55.390 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:55.390 Test: blockdev write read max offset ...passed 00:08:55.390 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:55.390 Test: blockdev writev readv 8 blocks ...passed 00:08:55.390 Test: blockdev writev readv 30 x 1block ...passed 00:08:55.390 Test: blockdev writev readv block ...passed 00:08:55.390 Test: blockdev writev readv size > 128k ...passed 00:08:55.390 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:55.390 Test: blockdev comparev and writev ...passed 00:08:55.390 Test: blockdev nvme passthru rw ...passed 00:08:55.390 Test: blockdev nvme passthru vendor specific ...passed 00:08:55.390 Test: blockdev nvme admin passthru ...passed[2024-11-29 15:51:06.570901] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:08:55.390 separate metadata which is not supported yet. 
00:08:55.390 00:08:55.390 Test: blockdev copy ...passed 00:08:55.390 00:08:55.390 Run Summary: Type Total Ran Passed Failed Inactive 00:08:55.390 suites 7 7 n/a 0 0 00:08:55.390 tests 161 161 161 0 0 00:08:55.390 asserts 1006 1006 1006 0 n/a 00:08:55.390 00:08:55.390 Elapsed time = 1.406 seconds 00:08:55.390 0 00:08:55.391 15:51:06 -- bdev/blockdev.sh@293 -- # killprocess 61858 00:08:55.391 15:51:06 -- common/autotest_common.sh@936 -- # '[' -z 61858 ']' 00:08:55.391 15:51:06 -- common/autotest_common.sh@940 -- # kill -0 61858 00:08:55.391 15:51:06 -- common/autotest_common.sh@941 -- # uname 00:08:55.391 15:51:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:55.391 15:51:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61858 00:08:55.391 15:51:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:55.391 15:51:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:55.391 15:51:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61858' 00:08:55.391 killing process with pid 61858 00:08:55.391 15:51:06 -- common/autotest_common.sh@955 -- # kill 61858 00:08:55.391 15:51:06 -- common/autotest_common.sh@960 -- # wait 61858 00:08:55.961 15:51:07 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:08:55.961 00:08:55.961 real 0m2.978s 00:08:55.961 user 0m7.694s 00:08:55.961 sys 0m0.332s 00:08:55.961 15:51:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:55.961 15:51:07 -- common/autotest_common.sh@10 -- # set +x 00:08:55.961 ************************************ 00:08:55.961 END TEST bdev_bounds 00:08:55.961 ************************************ 00:08:55.961 15:51:07 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:55.961 15:51:07 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:08:55.961 15:51:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:55.961 15:51:07 -- common/autotest_common.sh@10 -- # set +x 00:08:55.961 ************************************ 00:08:55.961 START TEST bdev_nbd 00:08:55.961 ************************************ 00:08:55.961 15:51:07 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:55.961 15:51:07 -- bdev/blockdev.sh@298 -- # uname -s 00:08:55.961 15:51:07 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:08:55.961 15:51:07 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.961 15:51:07 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:55.961 15:51:07 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:55.961 15:51:07 -- bdev/blockdev.sh@302 -- # local bdev_all 00:08:55.961 15:51:07 -- bdev/blockdev.sh@303 -- # local bdev_num=7 00:08:55.961 15:51:07 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:08:55.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
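bdev_nbd is gated on the kernel NBD driver, and the [[ -e /sys/module/nbd ]] probe above is the only check made; on a host where the module exists but is not loaded, a hypothetical pre-step would be:

    # Hypothetical pre-step; the harness itself only tests for the sysfs node.
    [[ -e /sys/module/nbd ]] || modprobe nbd || {
        echo "nbd kernel module unavailable; skipping bdev_nbd" >&2
        exit 0
    }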
00:08:55.962 15:51:07 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:55.962 15:51:07 -- bdev/blockdev.sh@309 -- # local nbd_all 00:08:55.962 15:51:07 -- bdev/blockdev.sh@310 -- # bdev_num=7 00:08:55.962 15:51:07 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:55.962 15:51:07 -- bdev/blockdev.sh@312 -- # local nbd_list 00:08:55.962 15:51:07 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:55.962 15:51:07 -- bdev/blockdev.sh@313 -- # local bdev_list 00:08:55.962 15:51:07 -- bdev/blockdev.sh@316 -- # nbd_pid=61925 00:08:55.962 15:51:07 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:55.962 15:51:07 -- bdev/blockdev.sh@318 -- # waitforlisten 61925 /var/tmp/spdk-nbd.sock 00:08:55.962 15:51:07 -- common/autotest_common.sh@829 -- # '[' -z 61925 ']' 00:08:55.962 15:51:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:55.962 15:51:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:55.962 15:51:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:55.962 15:51:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:55.962 15:51:07 -- common/autotest_common.sh@10 -- # set +x 00:08:55.962 15:51:07 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:56.223 [2024-11-29 15:51:07.429123] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
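With bdev_svc parked on /var/tmp/spdk-nbd.sock, the loop that follows exports each bdev as a kernel block device through the nbd_start_disk RPC and treats the node as ready once a single direct-I/O 4 KiB read succeeds, which is the repeated "1+0 records in / 1+0 records out" dd output below. One iteration reduced to its essentials, with names from this run and /tmp/nbdtest standing in for the harness's scratch file:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    # nbd_start_disk prints the /dev/nbdX node it claimed for the bdev.
    nbd_device=$("$rpc" -s "$sock" nbd_start_disk Nvme0n1p1)

    # Readiness probe used by the harness: one direct 4 KiB read.
    dd if="$nbd_device" of=/tmp/nbdtest bs=4096 count=1 iflag=direct

    # Detach when finished.
    "$rpc" -s "$sock" nbd_stop_disk "$nbd_device"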
00:08:56.223 [2024-11-29 15:51:07.429373] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.223 [2024-11-29 15:51:07.580008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.484 [2024-11-29 15:51:07.759691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.868 15:51:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:57.868 15:51:08 -- common/autotest_common.sh@862 -- # return 0 00:08:57.868 15:51:08 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:57.868 15:51:08 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.868 15:51:08 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:57.868 15:51:08 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:57.868 15:51:08 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:57.868 15:51:08 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.868 15:51:08 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:57.868 15:51:08 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:57.868 15:51:08 -- bdev/nbd_common.sh@24 -- # local i 00:08:57.868 15:51:08 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:57.868 15:51:08 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:57.868 15:51:08 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:57.868 15:51:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:08:57.868 15:51:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:57.868 15:51:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:57.868 15:51:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:57.868 15:51:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:57.869 15:51:09 -- common/autotest_common.sh@867 -- # local i 00:08:57.869 15:51:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:57.869 15:51:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:57.869 15:51:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:57.869 15:51:09 -- common/autotest_common.sh@871 -- # break 00:08:57.869 15:51:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:57.869 15:51:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:57.869 15:51:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:57.869 1+0 records in 00:08:57.869 1+0 records out 00:08:57.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476809 s, 8.6 MB/s 00:08:57.869 15:51:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.869 15:51:09 -- common/autotest_common.sh@884 -- # size=4096 00:08:57.869 15:51:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.869 15:51:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:57.869 15:51:09 -- common/autotest_common.sh@887 -- # return 0 00:08:57.869 15:51:09 -- bdev/nbd_common.sh@27 -- 
# (( i++ )) 00:08:57.869 15:51:09 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:57.869 15:51:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:08:58.131 15:51:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:58.131 15:51:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:58.131 15:51:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:58.131 15:51:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:58.131 15:51:09 -- common/autotest_common.sh@867 -- # local i 00:08:58.131 15:51:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:58.131 15:51:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:58.131 15:51:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:58.131 15:51:09 -- common/autotest_common.sh@871 -- # break 00:08:58.131 15:51:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:58.131 15:51:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:58.131 15:51:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:58.131 1+0 records in 00:08:58.131 1+0 records out 00:08:58.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000957094 s, 4.3 MB/s 00:08:58.131 15:51:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.131 15:51:09 -- common/autotest_common.sh@884 -- # size=4096 00:08:58.131 15:51:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.131 15:51:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:58.131 15:51:09 -- common/autotest_common.sh@887 -- # return 0 00:08:58.131 15:51:09 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:58.131 15:51:09 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:58.131 15:51:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:58.131 15:51:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:58.131 15:51:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:58.131 15:51:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:58.131 15:51:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:08:58.131 15:51:09 -- common/autotest_common.sh@867 -- # local i 00:08:58.131 15:51:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:58.131 15:51:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:58.131 15:51:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:08:58.131 15:51:09 -- common/autotest_common.sh@871 -- # break 00:08:58.131 15:51:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:58.131 15:51:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:58.131 15:51:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:58.131 1+0 records in 00:08:58.131 1+0 records out 00:08:58.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526613 s, 7.8 MB/s 00:08:58.131 15:51:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.131 15:51:09 -- common/autotest_common.sh@884 -- # size=4096 00:08:58.131 15:51:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.131 15:51:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:58.131 15:51:09 -- 
common/autotest_common.sh@887 -- # return 0 00:08:58.131 15:51:09 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:58.131 15:51:09 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:58.131 15:51:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:58.392 15:51:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:58.392 15:51:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:58.392 15:51:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:58.392 15:51:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:08:58.392 15:51:09 -- common/autotest_common.sh@867 -- # local i 00:08:58.392 15:51:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:58.392 15:51:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:58.392 15:51:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:08:58.392 15:51:09 -- common/autotest_common.sh@871 -- # break 00:08:58.392 15:51:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:58.392 15:51:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:58.392 15:51:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:58.392 1+0 records in 00:08:58.392 1+0 records out 00:08:58.392 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575707 s, 7.1 MB/s 00:08:58.392 15:51:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.392 15:51:09 -- common/autotest_common.sh@884 -- # size=4096 00:08:58.392 15:51:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.392 15:51:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:58.392 15:51:09 -- common/autotest_common.sh@887 -- # return 0 00:08:58.392 15:51:09 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:58.392 15:51:09 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:58.392 15:51:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:58.654 15:51:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:58.654 15:51:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:58.654 15:51:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:58.655 15:51:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:08:58.655 15:51:09 -- common/autotest_common.sh@867 -- # local i 00:08:58.655 15:51:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:58.655 15:51:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:58.655 15:51:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:08:58.655 15:51:09 -- common/autotest_common.sh@871 -- # break 00:08:58.655 15:51:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:58.655 15:51:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:58.655 15:51:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:58.655 1+0 records in 00:08:58.655 1+0 records out 00:08:58.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000685908 s, 6.0 MB/s 00:08:58.655 15:51:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.655 15:51:09 -- common/autotest_common.sh@884 -- # size=4096 00:08:58.655 15:51:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.655 15:51:09 
-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:58.655 15:51:09 -- common/autotest_common.sh@887 -- # return 0 00:08:58.655 15:51:09 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:58.655 15:51:09 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:58.655 15:51:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:58.916 15:51:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:58.916 15:51:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:58.916 15:51:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:58.916 15:51:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:08:58.916 15:51:10 -- common/autotest_common.sh@867 -- # local i 00:08:58.916 15:51:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:58.916 15:51:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:58.916 15:51:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:08:58.916 15:51:10 -- common/autotest_common.sh@871 -- # break 00:08:58.916 15:51:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:58.916 15:51:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:58.916 15:51:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:58.916 1+0 records in 00:08:58.916 1+0 records out 00:08:58.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000778966 s, 5.3 MB/s 00:08:58.916 15:51:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.916 15:51:10 -- common/autotest_common.sh@884 -- # size=4096 00:08:58.916 15:51:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.916 15:51:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:58.916 15:51:10 -- common/autotest_common.sh@887 -- # return 0 00:08:58.916 15:51:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:58.916 15:51:10 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:58.916 15:51:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:59.178 15:51:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:59.178 15:51:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:59.178 15:51:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:59.178 15:51:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:08:59.178 15:51:10 -- common/autotest_common.sh@867 -- # local i 00:08:59.178 15:51:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:59.178 15:51:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:59.178 15:51:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:08:59.178 15:51:10 -- common/autotest_common.sh@871 -- # break 00:08:59.178 15:51:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:59.178 15:51:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:59.178 15:51:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:59.178 1+0 records in 00:08:59.178 1+0 records out 00:08:59.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000858234 s, 4.8 MB/s 00:08:59.178 15:51:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.178 15:51:10 -- common/autotest_common.sh@884 -- # size=4096 00:08:59.178 15:51:10 -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.178 15:51:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:59.178 15:51:10 -- common/autotest_common.sh@887 -- # return 0 00:08:59.178 15:51:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:59.178 15:51:10 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:59.178 15:51:10 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:59.178 15:51:10 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:59.178 { 00:08:59.178 "nbd_device": "/dev/nbd0", 00:08:59.178 "bdev_name": "Nvme0n1p1" 00:08:59.178 }, 00:08:59.178 { 00:08:59.178 "nbd_device": "/dev/nbd1", 00:08:59.178 "bdev_name": "Nvme0n1p2" 00:08:59.178 }, 00:08:59.178 { 00:08:59.178 "nbd_device": "/dev/nbd2", 00:08:59.178 "bdev_name": "Nvme1n1" 00:08:59.178 }, 00:08:59.178 { 00:08:59.178 "nbd_device": "/dev/nbd3", 00:08:59.178 "bdev_name": "Nvme2n1" 00:08:59.178 }, 00:08:59.178 { 00:08:59.178 "nbd_device": "/dev/nbd4", 00:08:59.178 "bdev_name": "Nvme2n2" 00:08:59.178 }, 00:08:59.178 { 00:08:59.178 "nbd_device": "/dev/nbd5", 00:08:59.178 "bdev_name": "Nvme2n3" 00:08:59.178 }, 00:08:59.178 { 00:08:59.178 "nbd_device": "/dev/nbd6", 00:08:59.178 "bdev_name": "Nvme3n1" 00:08:59.178 } 00:08:59.178 ]' 00:08:59.178 15:51:10 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:59.178 15:51:10 -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:59.178 { 00:08:59.178 "nbd_device": "/dev/nbd0", 00:08:59.178 "bdev_name": "Nvme0n1p1" 00:08:59.178 }, 00:08:59.178 { 00:08:59.178 "nbd_device": "/dev/nbd1", 00:08:59.178 "bdev_name": "Nvme0n1p2" 00:08:59.178 }, 00:08:59.178 { 00:08:59.178 "nbd_device": "/dev/nbd2", 00:08:59.178 "bdev_name": "Nvme1n1" 00:08:59.178 }, 00:08:59.178 { 00:08:59.178 "nbd_device": "/dev/nbd3", 00:08:59.178 "bdev_name": "Nvme2n1" 00:08:59.178 }, 00:08:59.178 { 00:08:59.178 "nbd_device": "/dev/nbd4", 00:08:59.178 "bdev_name": "Nvme2n2" 00:08:59.178 }, 00:08:59.178 { 00:08:59.178 "nbd_device": "/dev/nbd5", 00:08:59.178 "bdev_name": "Nvme2n3" 00:08:59.178 }, 00:08:59.178 { 00:08:59.178 "nbd_device": "/dev/nbd6", 00:08:59.178 "bdev_name": "Nvme3n1" 00:08:59.178 } 00:08:59.178 ]' 00:08:59.178 15:51:10 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:59.439 15:51:10 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:59.439 15:51:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:59.439 15:51:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:59.439 15:51:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:59.439 15:51:10 -- bdev/nbd_common.sh@51 -- # local i 00:08:59.439 15:51:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:59.439 15:51:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:59.439 15:51:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:59.439 15:51:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:59.439 15:51:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:59.439 15:51:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.439 15:51:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.440 15:51:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:08:59.440 15:51:10 -- bdev/nbd_common.sh@41 -- # break 00:08:59.440 15:51:10 -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.440 15:51:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:59.440 15:51:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:59.700 15:51:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:59.700 15:51:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:59.700 15:51:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:59.700 15:51:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.700 15:51:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.700 15:51:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:59.700 15:51:10 -- bdev/nbd_common.sh@41 -- # break 00:08:59.700 15:51:10 -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.700 15:51:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:59.700 15:51:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:59.962 15:51:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:59.962 15:51:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:59.962 15:51:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:59.962 15:51:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.962 15:51:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.962 15:51:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:59.962 15:51:11 -- bdev/nbd_common.sh@41 -- # break 00:08:59.962 15:51:11 -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.962 15:51:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:59.962 15:51:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:59.962 15:51:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:59.962 15:51:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:59.962 15:51:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:59.962 15:51:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.962 15:51:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.962 15:51:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:59.962 15:51:11 -- bdev/nbd_common.sh@41 -- # break 00:08:59.962 15:51:11 -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.962 15:51:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:59.962 15:51:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:00.251 15:51:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:00.251 15:51:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:00.251 15:51:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:00.251 15:51:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:00.251 15:51:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:00.251 15:51:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:00.251 15:51:11 -- bdev/nbd_common.sh@41 -- # break 00:09:00.251 15:51:11 -- bdev/nbd_common.sh@45 -- # return 0 00:09:00.251 15:51:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:00.251 15:51:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:00.510 15:51:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:00.510 15:51:11 -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:00.510 15:51:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:00.510 15:51:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:00.510 15:51:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:00.510 15:51:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:00.510 15:51:11 -- bdev/nbd_common.sh@41 -- # break 00:09:00.510 15:51:11 -- bdev/nbd_common.sh@45 -- # return 0 00:09:00.510 15:51:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:00.510 15:51:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:00.768 15:51:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:00.768 15:51:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:00.768 15:51:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:09:00.768 15:51:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:00.768 15:51:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:00.768 15:51:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:00.768 15:51:11 -- bdev/nbd_common.sh@41 -- # break 00:09:00.768 15:51:11 -- bdev/nbd_common.sh@45 -- # return 0 00:09:00.768 15:51:11 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:00.768 15:51:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.768 15:51:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:00.768 15:51:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:00.768 15:51:12 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:00.768 15:51:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:00.768 15:51:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:00.768 15:51:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:00.768 15:51:12 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:00.768 15:51:12 -- bdev/nbd_common.sh@65 -- # true 00:09:00.768 15:51:12 -- bdev/nbd_common.sh@65 -- # count=0 00:09:00.768 15:51:12 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:01.027 15:51:12 -- bdev/nbd_common.sh@122 -- # count=0 00:09:01.027 15:51:12 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:01.027 15:51:12 -- bdev/nbd_common.sh@127 -- # return 0 00:09:01.027 15:51:12 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:01.027 15:51:12 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:01.027 15:51:12 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:01.027 15:51:12 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:01.027 15:51:12 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:01.027 15:51:12 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:01.027 15:51:12 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:01.027 15:51:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:01.027 15:51:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:01.027 15:51:12 
-- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:01.027 15:51:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:01.027 15:51:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:01.027 15:51:12 -- bdev/nbd_common.sh@12 -- # local i 00:09:01.027 15:51:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:01.027 15:51:12 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:01.027 15:51:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:09:01.027 /dev/nbd0 00:09:01.027 15:51:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:01.027 15:51:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:01.027 15:51:12 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:01.027 15:51:12 -- common/autotest_common.sh@867 -- # local i 00:09:01.027 15:51:12 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:01.027 15:51:12 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:01.027 15:51:12 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:01.027 15:51:12 -- common/autotest_common.sh@871 -- # break 00:09:01.027 15:51:12 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:01.027 15:51:12 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:01.027 15:51:12 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:01.027 1+0 records in 00:09:01.027 1+0 records out 00:09:01.027 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364634 s, 11.2 MB/s 00:09:01.027 15:51:12 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.027 15:51:12 -- common/autotest_common.sh@884 -- # size=4096 00:09:01.027 15:51:12 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.027 15:51:12 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:01.027 15:51:12 -- common/autotest_common.sh@887 -- # return 0 00:09:01.027 15:51:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:01.027 15:51:12 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:01.027 15:51:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:09:01.286 /dev/nbd1 00:09:01.286 15:51:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:01.286 15:51:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:01.286 15:51:12 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:01.286 15:51:12 -- common/autotest_common.sh@867 -- # local i 00:09:01.286 15:51:12 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:01.286 15:51:12 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:01.286 15:51:12 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:01.286 15:51:12 -- common/autotest_common.sh@871 -- # break 00:09:01.286 15:51:12 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:01.286 15:51:12 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:01.286 15:51:12 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:01.286 1+0 records in 00:09:01.286 1+0 records out 00:09:01.286 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244192 s, 16.8 MB/s 00:09:01.286 15:51:12 -- common/autotest_common.sh@884 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.286 15:51:12 -- common/autotest_common.sh@884 -- # size=4096 00:09:01.286 15:51:12 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.286 15:51:12 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:01.286 15:51:12 -- common/autotest_common.sh@887 -- # return 0 00:09:01.286 15:51:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:01.286 15:51:12 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:01.286 15:51:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:09:01.546 /dev/nbd10 00:09:01.546 15:51:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:01.546 15:51:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:01.546 15:51:12 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:09:01.546 15:51:12 -- common/autotest_common.sh@867 -- # local i 00:09:01.546 15:51:12 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:01.546 15:51:12 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:01.546 15:51:12 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:09:01.546 15:51:12 -- common/autotest_common.sh@871 -- # break 00:09:01.546 15:51:12 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:01.546 15:51:12 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:01.546 15:51:12 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:01.546 1+0 records in 00:09:01.546 1+0 records out 00:09:01.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108954 s, 3.8 MB/s 00:09:01.546 15:51:12 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.546 15:51:12 -- common/autotest_common.sh@884 -- # size=4096 00:09:01.546 15:51:12 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.546 15:51:12 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:01.546 15:51:12 -- common/autotest_common.sh@887 -- # return 0 00:09:01.546 15:51:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:01.546 15:51:12 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:01.546 15:51:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:01.808 /dev/nbd11 00:09:01.808 15:51:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:01.808 15:51:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:01.808 15:51:13 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:09:01.808 15:51:13 -- common/autotest_common.sh@867 -- # local i 00:09:01.808 15:51:13 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:01.808 15:51:13 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:01.808 15:51:13 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:09:01.808 15:51:13 -- common/autotest_common.sh@871 -- # break 00:09:01.808 15:51:13 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:01.808 15:51:13 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:01.808 15:51:13 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:01.808 1+0 records in 00:09:01.808 1+0 records out 00:09:01.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000986726 s, 4.2 MB/s 00:09:01.808 15:51:13 -- common/autotest_common.sh@884 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.808 15:51:13 -- common/autotest_common.sh@884 -- # size=4096 00:09:01.808 15:51:13 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.808 15:51:13 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:01.808 15:51:13 -- common/autotest_common.sh@887 -- # return 0 00:09:01.808 15:51:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:01.808 15:51:13 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:01.808 15:51:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:02.070 /dev/nbd12 00:09:02.070 15:51:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:02.070 15:51:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:02.070 15:51:13 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:09:02.070 15:51:13 -- common/autotest_common.sh@867 -- # local i 00:09:02.070 15:51:13 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:02.070 15:51:13 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:02.070 15:51:13 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:09:02.070 15:51:13 -- common/autotest_common.sh@871 -- # break 00:09:02.070 15:51:13 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:02.070 15:51:13 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:02.070 15:51:13 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:02.070 1+0 records in 00:09:02.070 1+0 records out 00:09:02.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114913 s, 3.6 MB/s 00:09:02.070 15:51:13 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.070 15:51:13 -- common/autotest_common.sh@884 -- # size=4096 00:09:02.070 15:51:13 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.070 15:51:13 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:02.070 15:51:13 -- common/autotest_common.sh@887 -- # return 0 00:09:02.070 15:51:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:02.070 15:51:13 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:02.070 15:51:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:02.070 /dev/nbd13 00:09:02.070 15:51:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:02.070 15:51:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:02.070 15:51:13 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:09:02.070 15:51:13 -- common/autotest_common.sh@867 -- # local i 00:09:02.070 15:51:13 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:02.070 15:51:13 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:02.070 15:51:13 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:09:02.070 15:51:13 -- common/autotest_common.sh@871 -- # break 00:09:02.070 15:51:13 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:02.070 15:51:13 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:02.070 15:51:13 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:02.070 1+0 records in 00:09:02.070 1+0 records out 00:09:02.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105944 s, 3.9 MB/s 00:09:02.070 15:51:13 -- 
common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.070 15:51:13 -- common/autotest_common.sh@884 -- # size=4096 00:09:02.070 15:51:13 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.070 15:51:13 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:02.070 15:51:13 -- common/autotest_common.sh@887 -- # return 0 00:09:02.070 15:51:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:02.070 15:51:13 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:02.070 15:51:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:02.332 /dev/nbd14 00:09:02.332 15:51:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:02.332 15:51:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:02.332 15:51:13 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:09:02.332 15:51:13 -- common/autotest_common.sh@867 -- # local i 00:09:02.332 15:51:13 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:02.332 15:51:13 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:02.332 15:51:13 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:09:02.332 15:51:13 -- common/autotest_common.sh@871 -- # break 00:09:02.332 15:51:13 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:02.332 15:51:13 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:02.332 15:51:13 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:02.332 1+0 records in 00:09:02.332 1+0 records out 00:09:02.332 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011745 s, 3.5 MB/s 00:09:02.332 15:51:13 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.332 15:51:13 -- common/autotest_common.sh@884 -- # size=4096 00:09:02.332 15:51:13 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.332 15:51:13 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:02.332 15:51:13 -- common/autotest_common.sh@887 -- # return 0 00:09:02.332 15:51:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:02.332 15:51:13 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:02.332 15:51:13 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:02.332 15:51:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.332 15:51:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:02.593 15:51:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:02.594 { 00:09:02.594 "nbd_device": "/dev/nbd0", 00:09:02.594 "bdev_name": "Nvme0n1p1" 00:09:02.594 }, 00:09:02.594 { 00:09:02.594 "nbd_device": "/dev/nbd1", 00:09:02.594 "bdev_name": "Nvme0n1p2" 00:09:02.594 }, 00:09:02.594 { 00:09:02.594 "nbd_device": "/dev/nbd10", 00:09:02.594 "bdev_name": "Nvme1n1" 00:09:02.594 }, 00:09:02.594 { 00:09:02.594 "nbd_device": "/dev/nbd11", 00:09:02.594 "bdev_name": "Nvme2n1" 00:09:02.594 }, 00:09:02.594 { 00:09:02.594 "nbd_device": "/dev/nbd12", 00:09:02.594 "bdev_name": "Nvme2n2" 00:09:02.594 }, 00:09:02.594 { 00:09:02.594 "nbd_device": "/dev/nbd13", 00:09:02.594 "bdev_name": "Nvme2n3" 00:09:02.594 }, 00:09:02.594 { 00:09:02.594 "nbd_device": "/dev/nbd14", 00:09:02.594 "bdev_name": "Nvme3n1" 00:09:02.594 } 00:09:02.594 ]' 00:09:02.594 15:51:13 -- bdev/nbd_common.sh@64 -- # echo '[ 
00:09:02.594 { 00:09:02.594 "nbd_device": "/dev/nbd0", 00:09:02.594 "bdev_name": "Nvme0n1p1" 00:09:02.594 }, 00:09:02.594 { 00:09:02.594 "nbd_device": "/dev/nbd1", 00:09:02.594 "bdev_name": "Nvme0n1p2" 00:09:02.594 }, 00:09:02.594 { 00:09:02.594 "nbd_device": "/dev/nbd10", 00:09:02.594 "bdev_name": "Nvme1n1" 00:09:02.594 }, 00:09:02.594 { 00:09:02.594 "nbd_device": "/dev/nbd11", 00:09:02.594 "bdev_name": "Nvme2n1" 00:09:02.594 }, 00:09:02.594 { 00:09:02.594 "nbd_device": "/dev/nbd12", 00:09:02.594 "bdev_name": "Nvme2n2" 00:09:02.594 }, 00:09:02.594 { 00:09:02.594 "nbd_device": "/dev/nbd13", 00:09:02.594 "bdev_name": "Nvme2n3" 00:09:02.594 }, 00:09:02.594 { 00:09:02.594 "nbd_device": "/dev/nbd14", 00:09:02.594 "bdev_name": "Nvme3n1" 00:09:02.594 } 00:09:02.594 ]' 00:09:02.594 15:51:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:02.594 15:51:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:02.594 /dev/nbd1 00:09:02.594 /dev/nbd10 00:09:02.594 /dev/nbd11 00:09:02.594 /dev/nbd12 00:09:02.594 /dev/nbd13 00:09:02.594 /dev/nbd14' 00:09:02.594 15:51:13 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:02.594 /dev/nbd1 00:09:02.594 /dev/nbd10 00:09:02.594 /dev/nbd11 00:09:02.594 /dev/nbd12 00:09:02.594 /dev/nbd13 00:09:02.594 /dev/nbd14' 00:09:02.594 15:51:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:02.594 15:51:13 -- bdev/nbd_common.sh@65 -- # count=7 00:09:02.594 15:51:13 -- bdev/nbd_common.sh@66 -- # echo 7 00:09:02.594 15:51:13 -- bdev/nbd_common.sh@95 -- # count=7 00:09:02.594 15:51:13 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:02.594 15:51:13 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:02.594 15:51:13 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:02.594 15:51:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:02.594 15:51:13 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:02.594 15:51:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:02.594 15:51:13 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:02.594 15:51:13 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:02.594 256+0 records in 00:09:02.594 256+0 records out 00:09:02.594 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00635564 s, 165 MB/s 00:09:02.594 15:51:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:02.594 15:51:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:02.855 256+0 records in 00:09:02.855 256+0 records out 00:09:02.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.190143 s, 5.5 MB/s 00:09:02.855 15:51:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:02.855 15:51:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:03.117 256+0 records in 00:09:03.117 256+0 records out 00:09:03.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.229358 s, 4.6 MB/s 00:09:03.117 15:51:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:03.117 15:51:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:03.379 256+0 records in 00:09:03.379 256+0 records out 
00:09:03.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.19598 s, 5.4 MB/s 00:09:03.379 15:51:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:03.379 15:51:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:03.379 256+0 records in 00:09:03.379 256+0 records out 00:09:03.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.115019 s, 9.1 MB/s 00:09:03.379 15:51:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:03.379 15:51:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:03.641 256+0 records in 00:09:03.641 256+0 records out 00:09:03.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.110037 s, 9.5 MB/s 00:09:03.641 15:51:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:03.641 15:51:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:03.641 256+0 records in 00:09:03.641 256+0 records out 00:09:03.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.224755 s, 4.7 MB/s 00:09:03.641 15:51:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:03.641 15:51:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:03.901 256+0 records in 00:09:03.901 256+0 records out 00:09:03.901 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.190646 s, 5.5 MB/s 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:03.901 15:51:15 -- bdev/nbd_common.sh@51 -- # local i 00:09:03.902 15:51:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:03.902 15:51:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:04.161 15:51:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:04.161 15:51:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:04.161 15:51:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:04.161 15:51:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.161 15:51:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.161 15:51:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:04.161 15:51:15 -- bdev/nbd_common.sh@41 -- # break 00:09:04.161 15:51:15 -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.161 15:51:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.161 15:51:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:04.420 15:51:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:04.420 15:51:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:04.420 15:51:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:04.420 15:51:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.420 15:51:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.420 15:51:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:04.420 15:51:15 -- bdev/nbd_common.sh@41 -- # break 00:09:04.420 15:51:15 -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.420 15:51:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.420 15:51:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:04.681 15:51:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:04.681 15:51:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:04.681 15:51:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:04.681 15:51:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.681 15:51:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.681 15:51:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:04.681 15:51:15 -- bdev/nbd_common.sh@41 -- # break 00:09:04.681 15:51:15 -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.681 15:51:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.681 15:51:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:04.681 15:51:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:04.681 15:51:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:04.681 15:51:16 -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd11 00:09:04.681 15:51:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.681 15:51:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.681 15:51:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:04.681 15:51:16 -- bdev/nbd_common.sh@41 -- # break 00:09:04.681 15:51:16 -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.681 15:51:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.681 15:51:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:04.942 15:51:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:04.942 15:51:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:04.942 15:51:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:04.942 15:51:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.942 15:51:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.942 15:51:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:04.942 15:51:16 -- bdev/nbd_common.sh@41 -- # break 00:09:04.942 15:51:16 -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.942 15:51:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.942 15:51:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:05.203 15:51:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:05.203 15:51:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:05.204 15:51:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:05.204 15:51:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:05.204 15:51:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:05.204 15:51:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:05.204 15:51:16 -- bdev/nbd_common.sh@41 -- # break 00:09:05.204 15:51:16 -- bdev/nbd_common.sh@45 -- # return 0 00:09:05.204 15:51:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:05.204 15:51:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:05.204 15:51:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:05.204 15:51:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:05.204 15:51:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:05.204 15:51:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:05.204 15:51:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:05.204 15:51:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:05.204 15:51:16 -- bdev/nbd_common.sh@41 -- # break 00:09:05.204 15:51:16 -- bdev/nbd_common.sh@45 -- # return 0 00:09:05.204 15:51:16 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:05.204 15:51:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.204 15:51:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:05.466 15:51:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:05.466 15:51:16 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:05.466 15:51:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:05.466 15:51:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:05.466 15:51:16 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:05.466 15:51:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:05.466 15:51:16 -- bdev/nbd_common.sh@65 -- # true 00:09:05.466 15:51:16 -- bdev/nbd_common.sh@65 -- # count=0 00:09:05.466 15:51:16 
-- bdev/nbd_common.sh@66 -- # echo 0 00:09:05.466 15:51:16 -- bdev/nbd_common.sh@104 -- # count=0 00:09:05.466 15:51:16 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:05.466 15:51:16 -- bdev/nbd_common.sh@109 -- # return 0 00:09:05.466 15:51:16 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:05.466 15:51:16 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.466 15:51:16 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:05.466 15:51:16 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:05.466 15:51:16 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:05.466 15:51:16 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:05.726 malloc_lvol_verify 00:09:05.726 15:51:17 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:05.985 f44bba12-cbc1-4c55-88a4-3f921347279f 00:09:05.985 15:51:17 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:05.985 7ab14318-f493-4c70-b8d5-8df626e9883d 00:09:05.985 15:51:17 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:06.244 /dev/nbd0 00:09:06.244 15:51:17 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:06.244 mke2fs 1.47.0 (5-Feb-2023) 00:09:06.244 Discarding device blocks: 0/4096 done 00:09:06.244 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:06.244 00:09:06.244 Allocating group tables: 0/1 done 00:09:06.244 Writing inode tables: 0/1 done 00:09:06.244 Creating journal (1024 blocks): done 00:09:06.244 Writing superblocks and filesystem accounting information: 0/1 done 00:09:06.244 00:09:06.244 15:51:17 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:06.244 15:51:17 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:06.244 15:51:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:06.244 15:51:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:06.245 15:51:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:06.245 15:51:17 -- bdev/nbd_common.sh@51 -- # local i 00:09:06.245 15:51:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:06.245 15:51:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:06.566 15:51:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:06.567 15:51:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:06.567 15:51:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:06.567 15:51:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:06.567 15:51:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:06.567 15:51:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:06.567 15:51:17 -- bdev/nbd_common.sh@41 -- # break 00:09:06.567 15:51:17 -- bdev/nbd_common.sh@45 -- # return 0 00:09:06.567 15:51:17 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:06.567 15:51:17 -- bdev/nbd_common.sh@147 -- # return 0 00:09:06.567 15:51:17 -- bdev/blockdev.sh@324 -- # killprocess 61925 00:09:06.567 15:51:17 -- 
common/autotest_common.sh@936 -- # '[' -z 61925 ']'
00:09:06.567 15:51:17 -- common/autotest_common.sh@940 -- # kill -0 61925
00:09:06.567 15:51:17 -- common/autotest_common.sh@941 -- # uname
00:09:06.567 15:51:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:06.567 15:51:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61925
00:09:06.567 15:51:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:06.567 killing process with pid 61925
00:09:06.567 15:51:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:06.567 15:51:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61925'
00:09:06.567 15:51:17 -- common/autotest_common.sh@955 -- # kill 61925
00:09:06.567 15:51:17 -- common/autotest_common.sh@960 -- # wait 61925
00:09:07.148 15:51:18 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT
00:09:07.148 
00:09:07.148 real 0m11.144s
00:09:07.148 user 0m15.296s
00:09:07.148 sys 0m3.426s
00:09:07.148 15:51:18 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:07.148 15:51:18 -- common/autotest_common.sh@10 -- # set +x
00:09:07.148 ************************************
00:09:07.148 END TEST bdev_nbd
00:09:07.148 ************************************
00:09:07.148 15:51:18 -- bdev/blockdev.sh@761 -- # [[ y == y ]]
00:09:07.148 15:51:18 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']'
00:09:07.148 skipping fio tests on NVMe due to multi-ns failures.
00:09:07.148 15:51:18 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']'
00:09:07.148 15:51:18 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:09:07.148 15:51:18 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT
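Before the next stage begins, a note on the command being launched: run_test wraps the bdevperf binary, which drives I/O against every bdev defined in bdev.json. The restatement below annotates each flag; it is editorial commentary rather than build output, the flag meanings are the conventional bdevperf ones, and -C is left unannotated because its exact meaning varies by SPDK version.

# Annotated restatement of the bdev_verify launch traced below (assumed flag meanings):
#   --json <file>  bdev configuration to load
#   -q 128         queue depth: up to 128 outstanding I/Os per job
#   -o 4096        I/O size in bytes (4 KiB)
#   -w verify      workload that writes a pattern, reads it back, and compares
#   -t 5           run time in seconds
#   -m 0x3         core mask: reactors on cores 0 and 1, which is why each bdev
#                  gets two jobs (Core Mask 0x1 and 0x2) in the results table
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''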
00:09:07.148 15:51:18 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:09:07.148 15:51:18 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:09:07.148 15:51:18 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:07.148 15:51:18 -- common/autotest_common.sh@10 -- # set +x
00:09:07.148 ************************************
00:09:07.148 START TEST bdev_verify
00:09:07.148 ************************************
00:09:07.148 15:51:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:09:07.407 [2024-11-29 15:51:18.613101] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:07.407 [2024-11-29 15:51:18.613209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62344 ]
00:09:07.407 [2024-11-29 15:51:18.760360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:07.666 [2024-11-29 15:51:18.899522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:07.666 [2024-11-29 15:51:18.899606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:08.232 Running I/O for 5 seconds...
00:09:13.503 
00:09:13.503 Latency(us)
00:09:13.503 [2024-11-29T15:51:24.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:13.503 [2024-11-29T15:51:24.934Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:13.503 Verification LBA range: start 0x0 length 0x5e800
00:09:13.503 Nvme0n1p1 : 5.04 2893.21 11.30 0.00 0.00 44152.33 4990.82 74206.92
00:09:13.503 [2024-11-29T15:51:24.934Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:13.503 Verification LBA range: start 0x5e800 length 0x5e800
00:09:13.503 Nvme0n1p1 : 5.05 2811.42 10.98 0.00 0.00 45268.22 2344.17 70980.53
00:09:13.503 [2024-11-29T15:51:24.934Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:13.503 Verification LBA range: start 0x0 length 0x5e7ff
00:09:13.503 Nvme0n1p2 : 5.04 2894.32 11.31 0.00 0.00 44111.72 4965.61 71787.13
00:09:13.503 [2024-11-29T15:51:24.934Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:13.503 Verification LBA range: start 0x5e7ff length 0x5e7ff
00:09:13.503 Nvme0n1p2 : 5.05 2810.70 10.98 0.00 0.00 45225.16 2860.90 87919.06
00:09:13.503 [2024-11-29T15:51:24.934Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:13.503 Verification LBA range: start 0x0 length 0xa0000
00:09:13.503 Nvme1n1 : 5.04 2893.45 11.30 0.00 0.00 44095.90 5898.24 50815.61
00:09:13.503 [2024-11-29T15:51:24.934Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:13.503 Verification LBA range: start 0xa0000 length 0xa0000
00:09:13.503 Nvme1n1 : 5.06 2810.06 10.98 0.00 0.00 45186.13 3327.21 57268.38
00:09:13.503 [2024-11-29T15:51:24.934Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:13.503 Verification LBA range: start 0x0 length 0x80000
00:09:13.503 Nvme2n1 : 5.04 2892.30 11.30 0.00 0.00 44020.77 7158.55 50412.31
00:09:13.503 [2024-11-29T15:51:24.934Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:13.503 Verification LBA range: start 0x80000 length 0x80000
00:09:13.503 Nvme2n1 : 5.06 2816.71 11.00 0.00 0.00 45081.11 1814.84 58478.28
00:09:13.503 [2024-11-29T15:51:24.934Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:13.503 Verification LBA range: start 0x0 length 0x80000
00:09:13.503 Nvme2n2 : 5.05 2890.83 11.29 0.00 0.00 44005.36 9074.22 52832.10
00:09:13.503 [2024-11-29T15:51:24.934Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:13.503 Verification LBA range: start 0x80000 length 0x80000
00:09:13.503 Nvme2n2 : 5.04 2809.52 10.97 0.00 0.00 45451.31 6200.71 59688.17
00:09:13.503 [2024-11-29T15:51:24.934Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:13.503 Verification LBA range: start 0x0 length 0x80000
00:09:13.503 Nvme2n3 : 5.05 2889.09 11.29 0.00 0.00 43988.85 11443.59 52025.50
00:09:13.503 [2024-11-29T15:51:24.934Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:13.503 Verification LBA range: start 0x80000 length 0x80000
00:09:13.503 Nvme2n3 : 5.04 2808.00 10.97 0.00 0.00 45408.34 7965.14 54848.59
00:09:13.503 [2024-11-29T15:51:24.934Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:13.503 Verification LBA range: start 0x0 length 0x20000
00:09:13.503 Nvme3n1 : 5.05 2895.81 11.31 0.00 0.00 43893.14 630.15 52428.80
00:09:13.503 [2024-11-29T15:51:24.934Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:13.503 Verification LBA range: start 0x20000 length 0x20000
00:09:13.503 Nvme3n1 : 5.05 2806.49 10.96 0.00 0.00 45357.30 9779.99 54848.59
00:09:13.503 [2024-11-29T15:51:24.934Z] ===================================================================================================================
00:09:13.503 [2024-11-29T15:51:24.934Z] Total : 39921.89 155.94 0.00 0.00 44651.60 630.15 87919.06
00:09:16.047 
00:09:16.047 real 0m8.588s
00:09:16.047 user 0m16.147s
00:09:16.047 sys 0m0.211s
00:09:16.047 15:51:27 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:16.047 15:51:27 -- common/autotest_common.sh@10 -- # set +x
00:09:16.047 ************************************
00:09:16.047 END TEST bdev_verify
00:09:16.047 ************************************
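A quick consistency check on the verify results above: with -o 4096 every I/O is 4 KiB, so the MiB/s column should equal IOPS x 4096 / 2^20, and the roughly 44 ms average latency is what Little's law predicts with 128 I/Os outstanding per job (128 / ~2890 IOPS is about 44 ms). The check below is editorial, not part of the test run:

# IOPS -> MiB/s at 4 KiB per I/O (1 MiB = 1048576 bytes)
echo "scale=2; 2893.21 * 4096 / 1048576" | bc    # 11.30, matches the first Nvme0n1p1 row
echo "scale=2; 39921.89 * 4096 / 1048576" | bc   # 155.94, matches the Total row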
00:09:16.047 15:51:27 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:16.047 15:51:27 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:09:16.047 15:51:27 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:16.047 15:51:27 -- common/autotest_common.sh@10 -- # set +x
00:09:16.047 ************************************
00:09:16.047 START TEST bdev_verify_big_io
00:09:16.047 ************************************
00:09:16.047 15:51:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:16.047 [2024-11-29 15:51:27.243071] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:16.047 [2024-11-29 15:51:27.243184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62456 ]
00:09:16.047 [2024-11-29 15:51:27.391269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:16.308 [2024-11-29 15:51:27.566182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:16.308 [2024-11-29 15:51:27.566253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:16.880 Running I/O for 5 seconds...
00:09:23.452 
00:09:23.452 Latency(us)
00:09:23.452 [2024-11-29T15:51:34.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:23.452 [2024-11-29T15:51:34.883Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:23.452 Verification LBA range: start 0x0 length 0x5e80
00:09:23.452 Nvme0n1p1 : 5.37 233.53 14.60 0.00 0.00 533112.93 83886.08 822728.86
00:09:23.452 [2024-11-29T15:51:34.883Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:23.452 Verification LBA range: start 0x5e80 length 0x5e80
00:09:23.452 Nvme0n1p1 : 5.37 218.84 13.68 0.00 0.00 571717.10 52428.80 884030.23
00:09:23.452 [2024-11-29T15:51:34.883Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:23.452 Verification LBA range: start 0x0 length 0x5e7f
00:09:23.452 Nvme0n1p2 : 5.37 233.47 14.59 0.00 0.00 524589.59 84289.38 738842.78
00:09:23.452 [2024-11-29T15:51:34.883Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:23.452 Verification LBA range: start 0x5e7f length 0x5e7f
00:09:23.452 Nvme0n1p2 : 5.44 223.27 13.95 0.00 0.00 552953.09 65737.65 800144.15
00:09:23.452 [2024-11-29T15:51:34.883Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:23.452 Verification LBA range: start 0x0 length 0xa000
00:09:23.452 Nvme1n1 : 5.41 239.65 14.98 0.00 0.00 507213.11 43354.58 674315.03
00:09:23.452 [2024-11-29T15:51:34.883Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:23.452 Verification LBA range: start 0xa000 length 0xa000
00:09:23.452 Nvme1n1 : 5.44 223.21 13.95 0.00 0.00 543079.25 66544.25 722710.84
00:09:23.452 [2024-11-29T15:51:34.883Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:23.452 Verification LBA range: start 0x0 length 0x8000
00:09:23.452 Nvme2n1 : 5.44 247.93 15.50 0.00 0.00 486542.60 27021.00 622692.82
00:09:23.452 [2024-11-29T15:51:34.883Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:23.452 Verification LBA range: start 0x8000 length 0x8000
00:09:23.452 Nvme2n1 : 5.49 228.26 14.27 0.00 0.00 522580.66 48597.46 651730.31
00:09:23.452 [2024-11-29T15:51:34.883Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:23.452 Verification LBA range: start 0x0 length 0x8000
00:09:23.452 Nvme2n2 : 5.46 254.54 15.91 0.00 0.00 467644.43 21979.77 603334.50
00:09:23.452 [2024-11-29T15:51:34.883Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:23.452 Verification LBA range: start 0x8000 length 0x8000
00:09:23.452 Nvme2n2 : 5.51 235.45 14.72 0.00 0.00 501178.17 11645.24 871124.68
00:09:23.452 [2024-11-29T15:51:34.883Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:23.452 Verification LBA range: start 0x0 length 0x8000
00:09:23.452 Nvme2n3 : 5.49 260.46 16.28 0.00 0.00 449988.93 23391.31 625919.21
00:09:23.452 [2024-11-29T15:51:34.883Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:23.452 Verification LBA range: start 0x8000 length 0x8000
00:09:23.452 Nvme2n3 : 5.52 244.32 15.27 0.00 0.00 477020.15 8721.33 864671.90
00:09:23.452 [2024-11-29T15:51:34.883Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:23.452 Verification LBA range: start 0x0 length 0x2000
00:09:23.452 Nvme3n1 : 5.51 284.46 17.78 0.00 0.00 407665.80 2041.70 493637.32
00:09:23.452 [2024-11-29T15:51:34.884Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:23.453 Verification LBA range: start 0x2000 length 0x2000
00:09:23.453 Nvme3n1 : 5.54 266.85 16.68 0.00 0.00 430416.99 1373.74 884030.23
00:09:23.453 [2024-11-29T15:51:34.884Z] ===================================================================================================================
00:09:23.453 [2024-11-29T15:51:34.884Z] Total : 3394.23 212.14 0.00 0.00 494553.00 1373.74 884030.23
00:09:24.019 
00:09:24.019 real 0m8.046s
00:09:24.019 user 0m15.096s
00:09:24.019 sys 0m0.222s
00:09:24.019 15:51:35 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:24.019 ************************************
00:09:24.019 END TEST bdev_verify_big_io
00:09:24.019 ************************************
00:09:24.019 15:51:35 -- common/autotest_common.sh@10 -- # set +x
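The START/END banners, the real/user/sys triple, and the xtrace toggles that bracket every test come from the run_test helper in common/autotest_common.sh. A minimal sketch of that pattern, reconstructed from this trace rather than copied from the source:

# Simplified reconstruction of the run_test harness seen throughout this log
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"    # produces the real/user/sys lines after the test body finishes
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}

The 64 KiB run it just timed also shows the expected IOPS/bandwidth trade-off: per-job IOPS fall to a few hundred, yet aggregate bandwidth rises, since 3394.23 IOPS x 64 KiB works out to the 212.14 MiB/s reported, versus 155.94 MiB/s for the 4 KiB verify run.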
00:09:26.226 00:09:26.226 Latency(us) 00:09:26.226 [2024-11-29T15:51:37.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.226 [2024-11-29T15:51:37.657Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.226 Nvme0n1p1 : 1.01 9633.74 37.63 0.00 0.00 13257.76 6427.57 25811.10 00:09:26.226 [2024-11-29T15:51:37.657Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.226 Nvme0n1p2 : 1.01 9660.32 37.74 0.00 0.00 13198.26 6301.54 27827.59 00:09:26.226 [2024-11-29T15:51:37.657Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.226 Nvme1n1 : 1.01 9649.35 37.69 0.00 0.00 13186.52 8519.68 20870.70 00:09:26.226 [2024-11-29T15:51:37.657Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.226 Nvme2n1 : 1.02 9638.40 37.65 0.00 0.00 13179.71 8670.92 19660.80 00:09:26.226 [2024-11-29T15:51:37.657Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.226 Nvme2n2 : 1.02 9627.57 37.61 0.00 0.00 13173.89 9023.80 20265.75 00:09:26.226 [2024-11-29T15:51:37.657Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.226 Nvme2n3 : 1.02 9553.90 37.32 0.00 0.00 13257.51 9427.10 23391.31 00:09:26.226 [2024-11-29T15:51:37.657Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.226 Nvme3n1 : 1.02 9543.12 37.28 0.00 0.00 13248.31 9275.86 22685.54 00:09:26.226 [2024-11-29T15:51:37.657Z] =================================================================================================================== 00:09:26.226 [2024-11-29T15:51:37.657Z] Total : 67306.41 262.92 0.00 0.00 13214.47 6301.54 27827.59 00:09:26.799 00:09:26.799 real 0m2.786s 00:09:26.799 user 0m2.495s 00:09:26.799 sys 0m0.179s 00:09:26.799 15:51:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:26.799 ************************************ 00:09:26.799 END TEST bdev_write_zeroes 00:09:26.799 ************************************ 00:09:26.799 15:51:38 -- common/autotest_common.sh@10 -- # set +x 00:09:26.799 15:51:38 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:26.799 15:51:38 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:26.799 15:51:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:26.799 15:51:38 -- common/autotest_common.sh@10 -- # set +x 00:09:26.799 ************************************ 00:09:26.799 START TEST bdev_json_nonenclosed 00:09:26.799 ************************************ 00:09:26.799 15:51:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:26.799 [2024-11-29 15:51:38.186443] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:26.799 [2024-11-29 15:51:38.186552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62607 ] 00:09:27.060 [2024-11-29 15:51:38.336991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.322 [2024-11-29 15:51:38.514107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.322 [2024-11-29 15:51:38.514250] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:27.322 [2024-11-29 15:51:38.514273] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:27.583 00:09:27.583 real 0m0.668s 00:09:27.583 user 0m0.460s 00:09:27.583 sys 0m0.103s 00:09:27.583 ************************************ 00:09:27.583 END TEST bdev_json_nonenclosed 00:09:27.584 ************************************ 00:09:27.584 15:51:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:27.584 15:51:38 -- common/autotest_common.sh@10 -- # set +x 00:09:27.584 15:51:38 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:27.584 15:51:38 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:27.584 15:51:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:27.584 15:51:38 -- common/autotest_common.sh@10 -- # set +x 00:09:27.584 ************************************ 00:09:27.584 START TEST bdev_json_nonarray 00:09:27.584 ************************************ 00:09:27.584 15:51:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:27.584 [2024-11-29 15:51:38.914482] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:27.584 [2024-11-29 15:51:38.914589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62638 ] 00:09:27.845 [2024-11-29 15:51:39.064096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.845 [2024-11-29 15:51:39.239946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.845 [2024-11-29 15:51:39.240116] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
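For context on the two failures above: both negative tests feed a deliberately malformed file to the same loader, spdk_subsystem_init_from_json_config(), which requires a top-level JSON object whose "subsystems" member is an array. nonenclosed.json drops the enclosing braces and nonarray.json makes "subsystems" a non-array, so each run is expected to stop the app with a non-zero status, which the WARNING below records. A minimal well-formed config, for contrast (an illustrative sketch with a hypothetical path, not a file from this run):

  cat > /tmp/minimal_bdev.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
  EOF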
00:09:27.845 [2024-11-29 15:51:39.240134] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:28.106 00:09:28.106 real 0m0.666s 00:09:28.106 user 0m0.465s 00:09:28.106 sys 0m0.096s 00:09:28.106 ************************************ 00:09:28.106 END TEST bdev_json_nonarray 00:09:28.106 15:51:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:28.106 15:51:39 -- common/autotest_common.sh@10 -- # set +x 00:09:28.106 ************************************ 00:09:28.368 15:51:39 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:09:28.368 15:51:39 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:09:28.368 15:51:39 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:28.368 15:51:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:28.368 15:51:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:28.368 15:51:39 -- common/autotest_common.sh@10 -- # set +x 00:09:28.368 ************************************ 00:09:28.368 START TEST bdev_gpt_uuid 00:09:28.368 ************************************ 00:09:28.368 15:51:39 -- common/autotest_common.sh@1114 -- # bdev_gpt_uuid 00:09:28.368 15:51:39 -- bdev/blockdev.sh@612 -- # local bdev 00:09:28.368 15:51:39 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:09:28.369 15:51:39 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=62663 00:09:28.369 15:51:39 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:28.369 15:51:39 -- bdev/blockdev.sh@47 -- # waitforlisten 62663 00:09:28.369 15:51:39 -- common/autotest_common.sh@829 -- # '[' -z 62663 ']' 00:09:28.369 15:51:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.369 15:51:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:28.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.369 15:51:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.369 15:51:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:28.369 15:51:39 -- common/autotest_common.sh@10 -- # set +x 00:09:28.369 15:51:39 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:28.369 [2024-11-29 15:51:39.646107] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:28.369 [2024-11-29 15:51:39.646221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62663 ] 00:09:28.369 [2024-11-29 15:51:39.794840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.629 [2024-11-29 15:51:39.974224] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:28.629 [2024-11-29 15:51:39.974434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.022 15:51:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:30.022 15:51:41 -- common/autotest_common.sh@862 -- # return 0 00:09:30.022 15:51:41 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:30.022 15:51:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.022 15:51:41 -- common/autotest_common.sh@10 -- # set +x 00:09:30.022 Some configs were skipped because the RPC state that can call them passed over. 
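The GPT checks that follow are plain JSON-RPC round-trips against that spdk_tgt; in this harness, rpc_cmd forwards the same subcommands to scripts/rpc.py, so outside the harness the sequence can be sketched as (paths and GUID taken from this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json   # replay the bdev config
  $rpc bdev_wait_for_examine                                             # block until GPT examine completes
  $rpc bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
      | jq -r '.[0].driver_specific.gpt.unique_partition_guid'           # expect the partition GUID back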
00:09:30.022 15:51:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.022 15:51:41 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:09:30.022 15:51:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.022 15:51:41 -- common/autotest_common.sh@10 -- # set +x 00:09:30.283 15:51:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.283 15:51:41 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:30.283 15:51:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.283 15:51:41 -- common/autotest_common.sh@10 -- # set +x 00:09:30.283 15:51:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.283 15:51:41 -- bdev/blockdev.sh@619 -- # bdev='[ 00:09:30.283 { 00:09:30.283 "name": "Nvme0n1p1", 00:09:30.283 "aliases": [ 00:09:30.283 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:30.283 ], 00:09:30.283 "product_name": "GPT Disk", 00:09:30.283 "block_size": 4096, 00:09:30.283 "num_blocks": 774144, 00:09:30.283 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:30.283 "md_size": 64, 00:09:30.283 "md_interleave": false, 00:09:30.283 "dif_type": 0, 00:09:30.283 "assigned_rate_limits": { 00:09:30.283 "rw_ios_per_sec": 0, 00:09:30.283 "rw_mbytes_per_sec": 0, 00:09:30.283 "r_mbytes_per_sec": 0, 00:09:30.283 "w_mbytes_per_sec": 0 00:09:30.283 }, 00:09:30.283 "claimed": false, 00:09:30.283 "zoned": false, 00:09:30.283 "supported_io_types": { 00:09:30.283 "read": true, 00:09:30.283 "write": true, 00:09:30.283 "unmap": true, 00:09:30.283 "write_zeroes": true, 00:09:30.283 "flush": true, 00:09:30.283 "reset": true, 00:09:30.283 "compare": true, 00:09:30.283 "compare_and_write": false, 00:09:30.283 "abort": true, 00:09:30.283 "nvme_admin": false, 00:09:30.283 "nvme_io": false 00:09:30.283 }, 00:09:30.283 "driver_specific": { 00:09:30.283 "gpt": { 00:09:30.283 "base_bdev": "Nvme0n1", 00:09:30.283 "offset_blocks": 256, 00:09:30.283 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:30.283 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:30.283 "partition_name": "SPDK_TEST_first" 00:09:30.283 } 00:09:30.283 } 00:09:30.283 } 00:09:30.283 ]' 00:09:30.283 15:51:41 -- bdev/blockdev.sh@620 -- # jq -r length 00:09:30.283 15:51:41 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:09:30.283 15:51:41 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:09:30.283 15:51:41 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:30.283 15:51:41 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:30.283 15:51:41 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:30.283 15:51:41 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:30.283 15:51:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.283 15:51:41 -- common/autotest_common.sh@10 -- # set +x 00:09:30.283 15:51:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.283 15:51:41 -- bdev/blockdev.sh@624 -- # bdev='[ 00:09:30.283 { 00:09:30.283 "name": "Nvme0n1p2", 00:09:30.283 "aliases": [ 00:09:30.283 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:30.283 ], 00:09:30.283 "product_name": "GPT Disk", 00:09:30.283 "block_size": 4096, 00:09:30.283 "num_blocks": 774143, 00:09:30.283 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:09:30.283 "md_size": 64, 00:09:30.283 "md_interleave": false, 00:09:30.283 "dif_type": 0, 00:09:30.283 "assigned_rate_limits": { 00:09:30.283 "rw_ios_per_sec": 0, 00:09:30.283 "rw_mbytes_per_sec": 0, 00:09:30.283 "r_mbytes_per_sec": 0, 00:09:30.283 "w_mbytes_per_sec": 0 00:09:30.283 }, 00:09:30.283 "claimed": false, 00:09:30.283 "zoned": false, 00:09:30.283 "supported_io_types": { 00:09:30.283 "read": true, 00:09:30.283 "write": true, 00:09:30.283 "unmap": true, 00:09:30.283 "write_zeroes": true, 00:09:30.283 "flush": true, 00:09:30.283 "reset": true, 00:09:30.283 "compare": true, 00:09:30.283 "compare_and_write": false, 00:09:30.283 "abort": true, 00:09:30.283 "nvme_admin": false, 00:09:30.283 "nvme_io": false 00:09:30.283 }, 00:09:30.283 "driver_specific": { 00:09:30.283 "gpt": { 00:09:30.283 "base_bdev": "Nvme0n1", 00:09:30.283 "offset_blocks": 774400, 00:09:30.283 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:30.283 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:30.283 "partition_name": "SPDK_TEST_second" 00:09:30.283 } 00:09:30.283 } 00:09:30.283 } 00:09:30.283 ]' 00:09:30.283 15:51:41 -- bdev/blockdev.sh@625 -- # jq -r length 00:09:30.283 15:51:41 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:09:30.283 15:51:41 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:09:30.283 15:51:41 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:30.283 15:51:41 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:30.283 15:51:41 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:30.283 15:51:41 -- bdev/blockdev.sh@629 -- # killprocess 62663 00:09:30.283 15:51:41 -- common/autotest_common.sh@936 -- # '[' -z 62663 ']' 00:09:30.283 15:51:41 -- common/autotest_common.sh@940 -- # kill -0 62663 00:09:30.284 15:51:41 -- common/autotest_common.sh@941 -- # uname 00:09:30.284 15:51:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:30.284 15:51:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62663 00:09:30.284 15:51:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:30.284 killing process with pid 62663 00:09:30.284 15:51:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:30.284 15:51:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62663' 00:09:30.284 15:51:41 -- common/autotest_common.sh@955 -- # kill 62663 00:09:30.284 15:51:41 -- common/autotest_common.sh@960 -- # wait 62663 00:09:31.715 00:09:31.715 real 0m3.519s 00:09:31.715 user 0m3.778s 00:09:31.715 sys 0m0.378s 00:09:31.715 15:51:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:31.715 ************************************ 00:09:31.715 END TEST bdev_gpt_uuid 00:09:31.715 ************************************ 00:09:31.715 15:51:43 -- common/autotest_common.sh@10 -- # set +x 00:09:31.715 15:51:43 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:09:31.715 15:51:43 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:09:31.715 15:51:43 -- bdev/blockdev.sh@809 -- # cleanup 00:09:31.715 15:51:43 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:31.975 15:51:43 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:31.975 15:51:43 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 
00:09:31.975 15:51:43 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:09:31.975 15:51:43 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:09:31.975 15:51:43 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:32.232 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:32.232 Waiting for block devices as requested 00:09:32.232 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.518 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.518 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.518 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.815 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:09:37.816 15:51:48 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme2n1 ]] 00:09:37.816 15:51:48 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme2n1 00:09:37.816 /dev/nvme2n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:37.816 /dev/nvme2n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:09:37.816 /dev/nvme2n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:37.816 /dev/nvme2n1: calling ioctl to re-read partition table: Success 00:09:37.816 15:51:49 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:09:37.816 00:09:37.816 real 0m59.161s 00:09:37.816 user 1m16.499s 00:09:37.816 sys 0m7.834s 00:09:37.816 15:51:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:37.816 ************************************ 00:09:37.816 END TEST blockdev_nvme_gpt 00:09:37.816 ************************************ 00:09:37.816 15:51:49 -- common/autotest_common.sh@10 -- # set +x 00:09:37.816 15:51:49 -- spdk/autotest.sh@209 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:37.816 15:51:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:37.816 15:51:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:37.816 15:51:49 -- common/autotest_common.sh@10 -- # set +x 00:09:37.816 ************************************ 00:09:37.816 START TEST nvme 00:09:37.816 ************************************ 00:09:37.816 15:51:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:38.077 * Looking for test storage... 
00:09:38.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:38.077 15:51:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:38.077 15:51:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:38.077 15:51:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:38.077 15:51:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:38.077 15:51:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:38.077 15:51:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:38.077 15:51:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:38.077 15:51:49 -- scripts/common.sh@335 -- # IFS=.-: 00:09:38.077 15:51:49 -- scripts/common.sh@335 -- # read -ra ver1 00:09:38.077 15:51:49 -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.077 15:51:49 -- scripts/common.sh@336 -- # read -ra ver2 00:09:38.077 15:51:49 -- scripts/common.sh@337 -- # local 'op=<' 00:09:38.077 15:51:49 -- scripts/common.sh@339 -- # ver1_l=2 00:09:38.077 15:51:49 -- scripts/common.sh@340 -- # ver2_l=1 00:09:38.077 15:51:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:38.077 15:51:49 -- scripts/common.sh@343 -- # case "$op" in 00:09:38.077 15:51:49 -- scripts/common.sh@344 -- # : 1 00:09:38.077 15:51:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:38.077 15:51:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:38.077 15:51:49 -- scripts/common.sh@364 -- # decimal 1 00:09:38.077 15:51:49 -- scripts/common.sh@352 -- # local d=1 00:09:38.077 15:51:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.077 15:51:49 -- scripts/common.sh@354 -- # echo 1 00:09:38.077 15:51:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:38.077 15:51:49 -- scripts/common.sh@365 -- # decimal 2 00:09:38.077 15:51:49 -- scripts/common.sh@352 -- # local d=2 00:09:38.077 15:51:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.077 15:51:49 -- scripts/common.sh@354 -- # echo 2 00:09:38.077 15:51:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:38.077 15:51:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:38.077 15:51:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:38.077 15:51:49 -- scripts/common.sh@367 -- # return 0 00:09:38.077 15:51:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.077 15:51:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:38.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.077 --rc genhtml_branch_coverage=1 00:09:38.077 --rc genhtml_function_coverage=1 00:09:38.077 --rc genhtml_legend=1 00:09:38.077 --rc geninfo_all_blocks=1 00:09:38.077 --rc geninfo_unexecuted_blocks=1 00:09:38.077 00:09:38.077 ' 00:09:38.077 15:51:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:38.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.077 --rc genhtml_branch_coverage=1 00:09:38.077 --rc genhtml_function_coverage=1 00:09:38.077 --rc genhtml_legend=1 00:09:38.077 --rc geninfo_all_blocks=1 00:09:38.077 --rc geninfo_unexecuted_blocks=1 00:09:38.077 00:09:38.077 ' 00:09:38.077 15:51:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:38.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.077 --rc genhtml_branch_coverage=1 00:09:38.077 --rc genhtml_function_coverage=1 00:09:38.077 --rc genhtml_legend=1 00:09:38.077 --rc geninfo_all_blocks=1 00:09:38.077 --rc geninfo_unexecuted_blocks=1 00:09:38.077 00:09:38.077 ' 00:09:38.077 15:51:49 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:38.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.077 --rc genhtml_branch_coverage=1 00:09:38.077 --rc genhtml_function_coverage=1 00:09:38.077 --rc genhtml_legend=1 00:09:38.077 --rc geninfo_all_blocks=1 00:09:38.077 --rc geninfo_unexecuted_blocks=1 00:09:38.077 00:09:38.077 ' 00:09:38.077 15:51:49 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:39.020 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:39.020 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.020 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.020 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.020 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.020 15:51:50 -- nvme/nvme.sh@79 -- # uname 00:09:39.020 15:51:50 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:39.020 15:51:50 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:39.020 15:51:50 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:39.020 15:51:50 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:39.020 15:51:50 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:09:39.020 15:51:50 -- common/autotest_common.sh@1055 -- # echo 0 00:09:39.020 Waiting for stub to ready for secondary processes... 00:09:39.020 15:51:50 -- common/autotest_common.sh@1057 -- # stubpid=63331 00:09:39.020 15:51:50 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:09:39.020 15:51:50 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:39.020 15:51:50 -- common/autotest_common.sh@1061 -- # [[ -e /proc/63331 ]] 00:09:39.020 15:51:50 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:39.020 15:51:50 -- common/autotest_common.sh@1062 -- # sleep 1s 00:09:39.281 [2024-11-29 15:51:50.458900] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:39.281 [2024-11-29 15:51:50.459012] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.852 [2024-11-29 15:51:51.211718] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:40.113 [2024-11-29 15:51:51.379060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.113 [2024-11-29 15:51:51.379511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.113 [2024-11-29 15:51:51.379527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.113 [2024-11-29 15:51:51.408703] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:40.113 [2024-11-29 15:51:51.422311] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:40.113 [2024-11-29 15:51:51.422522] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:40.113 15:51:51 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:40.113 15:51:51 -- common/autotest_common.sh@1061 -- # [[ -e /proc/63331 ]] 00:09:40.113 15:51:51 -- common/autotest_common.sh@1062 -- # sleep 1s 00:09:40.113 [2024-11-29 15:51:51.434819] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:40.113 [2024-11-29 15:51:51.434963] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:40.113 [2024-11-29 15:51:51.435072] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:40.113 [2024-11-29 15:51:51.442017] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:40.113 [2024-11-29 15:51:51.442137] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:40.113 [2024-11-29 15:51:51.442221] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:40.113 [2024-11-29 15:51:51.449648] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:40.113 [2024-11-29 15:51:51.449786] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:40.113 [2024-11-29 15:51:51.449884] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:40.113 [2024-11-29 15:51:51.449963] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:40.113 [2024-11-29 15:51:51.450074] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:41.057 done. 00:09:41.057 15:51:52 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:41.057 15:51:52 -- common/autotest_common.sh@1064 -- # echo done. 
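In plain form, the handshake interleaved through the trace above works like this: start_stub launches test/app/stub as the long-lived DPDK primary process (holding hugepages and the NVMe controllers for the secondary test binaries), then the harness polls for the stub's sentinel file before letting the tests proceed. A condensed sketch of that loop (the real implementation lives in autotest_common.sh):

  /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &   # 4096 MB hugemem, shm id 0, core mask 0xE
  stubpid=$!
  while [ ! -e /var/run/spdk_stub0 ]; do   # sentinel appears once the primary finishes init
      [ -e /proc/$stubpid ] || exit 1      # give up if the stub died first
      sleep 1s
  done
  echo done.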
00:09:41.057 15:51:52 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:41.057 15:51:52 -- common/autotest_common.sh@1064 -- # echo done. 00:09:41.057 15:51:52 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:41.057 15:51:52 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:09:41.057 15:51:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:41.057 15:51:52 -- common/autotest_common.sh@10 -- # set +x 00:09:41.057 ************************************ 00:09:41.057 START TEST nvme_reset 00:09:41.057 ************************************ 00:09:41.057 15:51:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:41.318 Initializing NVMe Controllers 00:09:41.318 Skipping QEMU NVMe SSD at 0000:00:09.0 00:09:41.318 Skipping QEMU NVMe SSD at 0000:00:06.0 00:09:41.318 Skipping QEMU NVMe SSD at 0000:00:07.0 00:09:41.318 Skipping QEMU NVMe SSD at 0000:00:08.0 00:09:41.318 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:41.318 00:09:41.318 real 0m0.190s 00:09:41.318 user 0m0.058s 00:09:41.318 sys 0m0.093s 00:09:41.318 15:51:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:41.318 ************************************ 00:09:41.318 END TEST nvme_reset 00:09:41.318 ************************************ 00:09:41.318 15:51:52 -- common/autotest_common.sh@10 -- # set +x 00:09:41.318 15:51:52 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:41.318 15:51:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:41.318 15:51:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:41.318 15:51:52 -- common/autotest_common.sh@10 -- # set +x 00:09:41.318 ************************************ 00:09:41.318 START TEST nvme_identify 00:09:41.318 ************************************ 00:09:41.318 15:51:52 -- common/autotest_common.sh@1114 -- # nvme_identify 00:09:41.318 15:51:52 -- nvme/nvme.sh@12 -- # bdfs=() 00:09:41.318 15:51:52 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:41.318 15:51:52 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:41.318 15:51:52 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:41.318 15:51:52 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:41.318 15:51:52 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:41.318 15:51:52 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:41.318 15:51:52 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:41.318 15:51:52 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:41.583 15:51:52 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:41.583 15:51:52 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:41.583 15:51:52 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:41.583 [2024-11-29 15:51:52.936559] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:09.0] process 63373 terminated unexpected 00:09:41.583 ===================================================== 00:09:41.583 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:41.583 ===================================================== 00:09:41.583 Controller Capabilities/Features 00:09:41.583 ================================ 00:09:41.583 Vendor ID: 1b36 00:09:41.583 Subsystem Vendor ID: 1af4 00:09:41.583 Serial Number: 12343 00:09:41.583 Model Number: QEMU NVMe Ctrl 00:09:41.583 Firmware Version: 8.0.0 00:09:41.583 Recommended Arb Burst: 6 00:09:41.583 IEEE
OUI Identifier: 00 54 52 00:09:41.583 Multi-path I/O 00:09:41.583 May have multiple subsystem ports: No 00:09:41.583 May have multiple controllers: Yes 00:09:41.583 Associated with SR-IOV VF: No 00:09:41.583 Max Data Transfer Size: 524288 00:09:41.583 Max Number of Namespaces: 256 00:09:41.583 Max Number of I/O Queues: 64 00:09:41.583 NVMe Specification Version (VS): 1.4 00:09:41.583 NVMe Specification Version (Identify): 1.4 00:09:41.583 Maximum Queue Entries: 2048 00:09:41.583 Contiguous Queues Required: Yes 00:09:41.583 Arbitration Mechanisms Supported 00:09:41.583 Weighted Round Robin: Not Supported 00:09:41.583 Vendor Specific: Not Supported 00:09:41.583 Reset Timeout: 7500 ms 00:09:41.583 Doorbell Stride: 4 bytes 00:09:41.583 NVM Subsystem Reset: Not Supported 00:09:41.583 Command Sets Supported 00:09:41.583 NVM Command Set: Supported 00:09:41.583 Boot Partition: Not Supported 00:09:41.583 Memory Page Size Minimum: 4096 bytes 00:09:41.583 Memory Page Size Maximum: 65536 bytes 00:09:41.583 Persistent Memory Region: Not Supported 00:09:41.583 Optional Asynchronous Events Supported 00:09:41.583 Namespace Attribute Notices: Supported 00:09:41.583 Firmware Activation Notices: Not Supported 00:09:41.583 ANA Change Notices: Not Supported 00:09:41.583 PLE Aggregate Log Change Notices: Not Supported 00:09:41.583 LBA Status Info Alert Notices: Not Supported 00:09:41.583 EGE Aggregate Log Change Notices: Not Supported 00:09:41.583 Normal NVM Subsystem Shutdown event: Not Supported 00:09:41.583 Zone Descriptor Change Notices: Not Supported 00:09:41.583 Discovery Log Change Notices: Not Supported 00:09:41.583 Controller Attributes 00:09:41.583 128-bit Host Identifier: Not Supported 00:09:41.583 Non-Operational Permissive Mode: Not Supported 00:09:41.583 NVM Sets: Not Supported 00:09:41.583 Read Recovery Levels: Not Supported 00:09:41.583 Endurance Groups: Supported 00:09:41.583 Predictable Latency Mode: Not Supported 00:09:41.583 Traffic Based Keep ALive: Not Supported 00:09:41.583 Namespace Granularity: Not Supported 00:09:41.583 SQ Associations: Not Supported 00:09:41.583 UUID List: Not Supported 00:09:41.583 Multi-Domain Subsystem: Not Supported 00:09:41.583 Fixed Capacity Management: Not Supported 00:09:41.583 Variable Capacity Management: Not Supported 00:09:41.583 Delete Endurance Group: Not Supported 00:09:41.583 Delete NVM Set: Not Supported 00:09:41.583 Extended LBA Formats Supported: Supported 00:09:41.583 Flexible Data Placement Supported: Supported 00:09:41.583 00:09:41.583 Controller Memory Buffer Support 00:09:41.583 ================================ 00:09:41.583 Supported: No 00:09:41.583 00:09:41.583 Persistent Memory Region Support 00:09:41.583 ================================ 00:09:41.583 Supported: No 00:09:41.583 00:09:41.583 Admin Command Set Attributes 00:09:41.583 ============================ 00:09:41.583 Security Send/Receive: Not Supported 00:09:41.583 Format NVM: Supported 00:09:41.583 Firmware Activate/Download: Not Supported 00:09:41.583 Namespace Management: Supported 00:09:41.583 Device Self-Test: Not Supported 00:09:41.583 Directives: Supported 00:09:41.583 NVMe-MI: Not Supported 00:09:41.583 Virtualization Management: Not Supported 00:09:41.583 Doorbell Buffer Config: Supported 00:09:41.583 Get LBA Status Capability: Not Supported 00:09:41.583 Command & Feature Lockdown Capability: Not Supported 00:09:41.584 Abort Command Limit: 4 00:09:41.584 Async Event Request Limit: 4 00:09:41.584 Number of Firmware Slots: N/A 00:09:41.584 Firmware Slot 1 Read-Only: N/A 
00:09:41.584 Firmware Activation Without Reset: N/A 00:09:41.584 Multiple Update Detection Support: N/A 00:09:41.584 Firmware Update Granularity: No Information Provided 00:09:41.584 Per-Namespace SMART Log: Yes 00:09:41.584 Asymmetric Namespace Access Log Page: Not Supported 00:09:41.584 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:41.584 Command Effects Log Page: Supported 00:09:41.584 Get Log Page Extended Data: Supported 00:09:41.584 Telemetry Log Pages: Not Supported 00:09:41.584 Persistent Event Log Pages: Not Supported 00:09:41.584 Supported Log Pages Log Page: May Support 00:09:41.584 Commands Supported & Effects Log Page: Not Supported 00:09:41.584 Feature Identifiers & Effects Log Page:May Support 00:09:41.584 NVMe-MI Commands & Effects Log Page: May Support 00:09:41.584 Data Area 4 for Telemetry Log: Not Supported 00:09:41.584 Error Log Page Entries Supported: 1 00:09:41.584 Keep Alive: Not Supported 00:09:41.584 00:09:41.584 NVM Command Set Attributes 00:09:41.584 ========================== 00:09:41.584 Submission Queue Entry Size 00:09:41.584 Max: 64 00:09:41.584 Min: 64 00:09:41.584 Completion Queue Entry Size 00:09:41.584 Max: 16 00:09:41.584 Min: 16 00:09:41.584 Number of Namespaces: 256 00:09:41.584 Compare Command: Supported 00:09:41.584 Write Uncorrectable Command: Not Supported 00:09:41.584 Dataset Management Command: Supported 00:09:41.584 Write Zeroes Command: Supported 00:09:41.584 Set Features Save Field: Supported 00:09:41.584 Reservations: Not Supported 00:09:41.584 Timestamp: Supported 00:09:41.584 Copy: Supported 00:09:41.584 Volatile Write Cache: Present 00:09:41.584 Atomic Write Unit (Normal): 1 00:09:41.584 Atomic Write Unit (PFail): 1 00:09:41.584 Atomic Compare & Write Unit: 1 00:09:41.584 Fused Compare & Write: Not Supported 00:09:41.584 Scatter-Gather List 00:09:41.584 SGL Command Set: Supported 00:09:41.584 SGL Keyed: Not Supported 00:09:41.584 SGL Bit Bucket Descriptor: Not Supported 00:09:41.584 SGL Metadata Pointer: Not Supported 00:09:41.584 Oversized SGL: Not Supported 00:09:41.584 SGL Metadata Address: Not Supported 00:09:41.584 SGL Offset: Not Supported 00:09:41.584 Transport SGL Data Block: Not Supported 00:09:41.584 Replay Protected Memory Block: Not Supported 00:09:41.584 00:09:41.584 Firmware Slot Information 00:09:41.584 ========================= 00:09:41.584 Active slot: 1 00:09:41.584 Slot 1 Firmware Revision: 1.0 00:09:41.584 00:09:41.584 00:09:41.584 Commands Supported and Effects 00:09:41.584 ============================== 00:09:41.584 Admin Commands 00:09:41.584 -------------- 00:09:41.584 Delete I/O Submission Queue (00h): Supported 00:09:41.584 Create I/O Submission Queue (01h): Supported 00:09:41.584 Get Log Page (02h): Supported 00:09:41.584 Delete I/O Completion Queue (04h): Supported 00:09:41.584 Create I/O Completion Queue (05h): Supported 00:09:41.584 Identify (06h): Supported 00:09:41.584 Abort (08h): Supported 00:09:41.584 Set Features (09h): Supported 00:09:41.584 Get Features (0Ah): Supported 00:09:41.584 Asynchronous Event Request (0Ch): Supported 00:09:41.584 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:41.584 Directive Send (19h): Supported 00:09:41.584 Directive Receive (1Ah): Supported 00:09:41.584 Virtualization Management (1Ch): Supported 00:09:41.584 Doorbell Buffer Config (7Ch): Supported 00:09:41.584 Format NVM (80h): Supported LBA-Change 00:09:41.584 I/O Commands 00:09:41.584 ------------ 00:09:41.584 Flush (00h): Supported LBA-Change 00:09:41.584 Write (01h): Supported LBA-Change 
00:09:41.584 Read (02h): Supported 00:09:41.584 Compare (05h): Supported 00:09:41.584 Write Zeroes (08h): Supported LBA-Change 00:09:41.584 Dataset Management (09h): Supported LBA-Change 00:09:41.584 Unknown (0Ch): Supported 00:09:41.584 Unknown (12h): Supported 00:09:41.584 Copy (19h): Supported LBA-Change 00:09:41.584 Unknown (1Dh): Supported LBA-Change 00:09:41.584 00:09:41.584 Error Log 00:09:41.584 ========= 00:09:41.584 00:09:41.584 Arbitration 00:09:41.584 =========== 00:09:41.584 Arbitration Burst: no limit 00:09:41.584 00:09:41.584 Power Management 00:09:41.584 ================ 00:09:41.584 Number of Power States: 1 00:09:41.584 Current Power State: Power State #0 00:09:41.584 Power State #0: 00:09:41.584 Max Power: 25.00 W 00:09:41.584 Non-Operational State: Operational 00:09:41.584 Entry Latency: 16 microseconds 00:09:41.584 Exit Latency: 4 microseconds 00:09:41.584 Relative Read Throughput: 0 00:09:41.584 Relative Read Latency: 0 00:09:41.584 Relative Write Throughput: 0 00:09:41.584 Relative Write Latency: 0 00:09:41.584 Idle Power: Not Reported 00:09:41.584 Active Power: Not Reported 00:09:41.584 Non-Operational Permissive Mode: Not Supported 00:09:41.584 00:09:41.584 Health Information 00:09:41.584 ================== 00:09:41.584 Critical Warnings: 00:09:41.584 Available Spare Space: OK 00:09:41.584 Temperature: OK 00:09:41.584 Device Reliability: OK 00:09:41.584 Read Only: No 00:09:41.584 Volatile Memory Backup: OK 00:09:41.584 Current Temperature: 323 Kelvin (50 Celsius) 00:09:41.584 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:41.584 Available Spare: 0% 00:09:41.584 Available Spare Threshold: 0% 00:09:41.584 Life Percentage Used: 0% 00:09:41.584 Data Units Read: 1441 00:09:41.584 Data Units Written: 672 00:09:41.584 Host Read Commands: 66480 00:09:41.584 Host Write Commands: 32806 00:09:41.584 Controller Busy Time: 0 minutes 00:09:41.584 Power Cycles: 0 00:09:41.584 Power On Hours: 0 hours 00:09:41.584 Unsafe Shutdowns: 0 00:09:41.584 Unrecoverable Media Errors: 0 00:09:41.584 Lifetime Error Log Entries: 0 00:09:41.584 Warning Temperature Time: 0 minutes 00:09:41.584 Critical Temperature Time: 0 minutes 00:09:41.584 00:09:41.584 Number of Queues 00:09:41.584 ================ 00:09:41.584 Number of I/O Submission Queues: 64 00:09:41.584 Number of I/O Completion Queues: 64 00:09:41.584 00:09:41.584 ZNS Specific Controller Data 00:09:41.584 ============================ 00:09:41.584 Zone Append Size Limit: 0 00:09:41.584 00:09:41.584 00:09:41.584 Active Namespaces 00:09:41.584 ================= 00:09:41.584 Namespace ID:1 00:09:41.584 Error Recovery Timeout: Unlimited 00:09:41.584 Command Set Identifier: NVM (00h) 00:09:41.584 Deallocate: Supported 00:09:41.584 Deallocated/Unwritten Error: Supported 00:09:41.584 Deallocated Read Value: All 0x00 00:09:41.584 Deallocate in Write Zeroes: Not Supported 00:09:41.584 Deallocated Guard Field: 0xFFFF 00:09:41.584 Flush: Supported 00:09:41.584 Reservation: Not Supported 00:09:41.584 Namespace Sharing Capabilities: Multiple Controllers 00:09:41.584 Size (in LBAs): 262144 (1GiB) 00:09:41.584 Capacity (in LBAs): 262144 (1GiB) 00:09:41.584 Utilization (in LBAs): 262144 (1GiB) 00:09:41.584 Thin Provisioning: Not Supported 00:09:41.584 Per-NS Atomic Units: No 00:09:41.584 Maximum Single Source Range Length: 128 00:09:41.584 Maximum Copy Length: 128 00:09:41.584 Maximum Source Range Count: 128 00:09:41.584 NGUID/EUI64 Never Reused: No 00:09:41.584 Namespace Write Protected: No 00:09:41.584 Endurance group ID: 1 00:09:41.584 Number of 
LBA Formats: 8 00:09:41.584 Current LBA Format: LBA Format #04 00:09:41.584 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:41.584 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:41.584 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:41.584 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:41.584 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:41.584 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:41.584 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:41.584 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:41.584 00:09:41.584 Get Feature FDP: 00:09:41.584 ================ 00:09:41.584 Enabled: Yes 00:09:41.584 FDP configuration index: 0 00:09:41.584 00:09:41.584 FDP configurations log page 00:09:41.584 =========================== 00:09:41.584 Number of FDP configurations: 1 00:09:41.584 Version: 0 00:09:41.584 Size: 112 00:09:41.584 FDP Configuration Descriptor: 0 00:09:41.584 Descriptor Size: 96 00:09:41.584 Reclaim Group Identifier format: 2 00:09:41.584 FDP Volatile Write Cache: Not Present 00:09:41.584 FDP Configuration: Valid 00:09:41.584 Vendor Specific Size: 0 00:09:41.584 Number of Reclaim Groups: 2 00:09:41.584 Number of Reclaim Unit Handles: 8 00:09:41.584 Max Placement Identifiers: 128 00:09:41.584 Number of Namespaces Supported: 256 00:09:41.584 Reclaim Unit Nominal Size: 6000000 bytes 00:09:41.584 Estimated Reclaim Unit Time Limit: Not Reported 00:09:41.585 RUH Desc #000: RUH Type: Initially Isolated 00:09:41.585 RUH Desc #001: RUH Type: Initially Isolated 00:09:41.585 RUH Desc #002: RUH Type: Initially Isolated 00:09:41.585 RUH Desc #003: RUH Type: Initially Isolated 00:09:41.585 RUH Desc #004: RUH Type: Initially Isolated 00:09:41.585 RUH Desc #005: RUH Type: Initially Isolated 00:09:41.585 RUH Desc #006: RUH Type: Initially Isolated 00:09:41.585 RUH Desc #007: RUH Type: Initially Isolated 00:09:41.585 00:09:41.585 FDP reclaim unit handle usage log page 00:09:41.585 ====================================== 00:09:41.585 Number of Reclaim Unit Handles: 8 00:09:41.585 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:41.585 RUH Usage Desc #001: RUH Attributes: Unused 00:09:41.585 RUH Usage Desc #002: RUH Attributes: Unused 00:09:41.585 RUH Usage Desc #003: RUH Attributes: Unused 00:09:41.585 RUH Usage Desc #004: RUH Attributes: Unused 00:09:41.585 RUH Usage Desc #005: RUH Attributes: Unused 00:09:41.585 RUH Usage Desc #006: RUH Attributes: Unused 00:09:41.585 RUH Usage Desc #007: RUH Attributes: Unused 00:09:41.585 00:09:41.585 FDP statistics log page 00:09:41.585 ======================= 00:09:41.585 Host bytes with metadata written: 449896448 00:09:41.585 Media bytes with metadata written: 449966080 00:09:41.585 Media bytes erased: 0 00:09:41.585 00:09:41.585 FDP events log page 00:09:41.585 =================== 00:09:41.585 Number of FDP events: 0 00:09:41.585 00:09:41.585 ===================================================== 00:09:41.585 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:41.585 ===================================================== 00:09:41.585 Controller Capabilities/Features 00:09:41.585 ================================ 00:09:41.585 Vendor ID: 1b36 00:09:41.585 Subsystem Vendor ID: 1af4 00:09:41.585 Serial Number: 12340 00:09:41.585 Model Number: QEMU NVMe Ctrl 00:09:41.585 Firmware Version: 8.0.0 00:09:41.585 Recommended Arb Burst: 6 00:09:41.585 IEEE OUI Identifier: 00 54 52 00:09:41.585 Multi-path I/O 00:09:41.585 May have multiple subsystem ports: No 00:09:41.585 May have multiple
controllers: No 00:09:41.585 Associated with SR-IOV VF: No 00:09:41.585 Max Data Transfer Size: 524288 00:09:41.585 Max Number of Namespaces: 256 00:09:41.585 Max Number of I/O Queues: 64 00:09:41.585 NVMe Specification Version (VS): 1.4 00:09:41.585 NVMe Specification Version (Identify): 1.4 00:09:41.585 Maximum Queue Entries: 2048 00:09:41.585 Contiguous Queues Required: Yes 00:09:41.585 Arbitration Mechanisms Supported 00:09:41.585 Weighted Round Robin: Not Supported 00:09:41.585 Vendor Specific: Not Supported 00:09:41.585 Reset Timeout: 7500 ms 00:09:41.585 Doorbell Stride: 4 bytes 00:09:41.585 NVM Subsystem Reset: Not Supported 00:09:41.585 Command Sets Supported 00:09:41.585 NVM Command Set: Supported 00:09:41.585 Boot Partition: Not Supported 00:09:41.585 Memory Page Size Minimum: 4096 bytes 00:09:41.585 Memory Page Size Maximum: 65536 bytes 00:09:41.585 Persistent Memory Region: Not Supported 00:09:41.585 Optional Asynchronous Events Supported 00:09:41.585 Namespace Attribute Notices: Supported 00:09:41.585 Firmware Activation Notices: Not Supported 00:09:41.585 ANA Change Notices: Not Supported 00:09:41.585 PLE Aggregate Log Change Notices: Not Supported 00:09:41.585 LBA Status Info Alert Notices: Not Supported 00:09:41.585 EGE Aggregate Log Change Notices: Not Supported 00:09:41.585 Normal NVM Subsystem Shutdown event: Not Supported 00:09:41.585 Zone Descriptor Change Notices: Not Supported 00:09:41.585 Discovery Log Change Notices: Not Supported 00:09:41.585 Controller Attributes 00:09:41.585 128-bit Host Identifier: Not Supported 00:09:41.585 Non-Operational Permissive Mode: Not Supported 00:09:41.585 NVM Sets: Not Supported 00:09:41.585 Read Recovery Levels: Not Supported 00:09:41.585 Endurance Groups: Not Supported 00:09:41.585 Predictable Latency Mode: Not Supported 00:09:41.585 Traffic Based Keep ALive: Not Supported 00:09:41.585 Namespace Granularity: Not Supported 00:09:41.585 SQ Associations: Not Supported 00:09:41.585 UUID List: Not Supported 00:09:41.585 Multi-Domain Subsystem: Not Supported 00:09:41.585 Fixed Capacity Management: Not Supported 00:09:41.585 Variable Capacity Management: Not Supported 00:09:41.585 Delete Endurance Group: Not Supported 00:09:41.585 Delete NVM Set: Not Supported 00:09:41.585 Extended LBA Formats Supported: Supported 00:09:41.585 Flexible Data Placement Supported: Not Supported 00:09:41.585 00:09:41.585 Controller Memory Buffer Support 00:09:41.585 ================================ 00:09:41.585 Supported: No 00:09:41.585 00:09:41.585 Persistent Memory Region Support 00:09:41.585 ================================ 00:09:41.585 Supported: No 00:09:41.585 00:09:41.585 Admin Command Set Attributes 00:09:41.585 ============================ 00:09:41.585 Security Send/Receive: Not Supported 00:09:41.585 Format NVM: Supported 00:09:41.585 Firmware Activate/Download: Not Supported 00:09:41.585 Namespace Management: Supported 00:09:41.585 Device Self-Test: Not Supported 00:09:41.585 Directives: Supported 00:09:41.585 NVMe-MI: Not Supported 00:09:41.585 Virtualization Management: Not Supported 00:09:41.585 Doorbell Buffer Config: Supported 00:09:41.585 Get LBA Status Capability: Not Supported 00:09:41.585 Command & Feature Lockdown Capability: Not Supported 00:09:41.585 Abort Command Limit: 4 00:09:41.585 Async Event Request Limit: 4 00:09:41.585 Number of Firmware Slots: N/A 00:09:41.585 Firmware Slot 1 Read-Only: N/A 00:09:41.585 Firmware Activation Without Reset: N/A 00:09:41.585 Multiple Update Detection Support: N/A 00:09:41.585 Firmware Update 
Granularity: No Information Provided 00:09:41.585 Per-Namespace SMART Log: Yes 00:09:41.585 Asymmetric Namespace Access Log Page: Not Supported 00:09:41.585 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:41.585 Command Effects Log Page: Supported 00:09:41.585 Get Log Page Extended Data: Supported 00:09:41.585 Telemetry Log Pages: Not Supported 00:09:41.585 Persistent Event Log Pages: Not Supported 00:09:41.585 Supported Log Pages Log Page: May Support 00:09:41.585 Commands Supported & Effects Log Page: Not Supported 00:09:41.585 Feature Identifiers & Effects Log Page:May Support 00:09:41.585 NVMe-MI Commands & Effects Log Page: May Support 00:09:41.585 Data Area 4 for Telemetry Log: Not Supported 00:09:41.585 Error Log Page Entries Supported: 1 00:09:41.585 Keep Alive: Not Supported 00:09:41.585 00:09:41.585 NVM Command Set Attributes 00:09:41.585 ========================== 00:09:41.585 Submission Queue Entry Size 00:09:41.585 Max: 64 00:09:41.585 Min: 64 00:09:41.585 Completion Queue Entry Size 00:09:41.585 Max: 16 00:09:41.585 Min: 16 00:09:41.585 Number of Namespaces: 256 00:09:41.585 Compare Command: Supported 00:09:41.585 Write Uncorrectable Command: Not Supported 00:09:41.585 Dataset Management Command: Supported 00:09:41.585 Write Zeroes Command: Supported 00:09:41.585 Set Features Save Field: Supported 00:09:41.585 Reservations: Not Supported 00:09:41.585 Timestamp: Supported 00:09:41.585 Copy: Supported 00:09:41.585 Volatile Write Cache: Present 00:09:41.585 Atomic Write Unit (Normal): 1 00:09:41.585 Atomic Write Unit (PFail): 1 00:09:41.585 Atomic Compare & Write Unit: 1 00:09:41.585 Fused Compare & Write: Not Supported 00:09:41.585 Scatter-Gather List 00:09:41.585 SGL Command Set: Supported 00:09:41.585 SGL Keyed: Not Supported 00:09:41.585 SGL Bit Bucket Descriptor: Not Supported 00:09:41.585 SGL Metadata Pointer: Not Supported 00:09:41.585 Oversized SGL: Not Supported 00:09:41.585 SGL Metadata Address: Not Supported 00:09:41.585 SGL Offset: Not Supported 00:09:41.585 Transport SGL Data Block: Not Supported 00:09:41.585 Replay Protected Memory Block: Not Supported 00:09:41.585 00:09:41.585 Firmware Slot Information 00:09:41.585 ========================= 00:09:41.585 Active slot: 1 00:09:41.585 Slot 1 Firmware Revision: 1.0 00:09:41.585 00:09:41.585 00:09:41.585 Commands Supported and Effects 00:09:41.585 ============================== 00:09:41.585 Admin Commands 00:09:41.585 -------------- 00:09:41.585 Delete I/O Submission Queue (00h): Supported 00:09:41.585 Create I/O Submission Queue (01h): Supported 00:09:41.585 Get Log Page (02h): Supported 00:09:41.585 Delete I/O Completion Queue (04h): Supported 00:09:41.585 Create I/O Completion Queue (05h): Supported 00:09:41.585 Identify (06h): Supported 00:09:41.585 Abort (08h): Supported 00:09:41.585 Set Features (09h): Supported 00:09:41.585 Get Features (0Ah): Supported 00:09:41.585 Asynchronous Event Request (0Ch): Supported 00:09:41.585 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:41.585 Directive Send (19h): Supported 00:09:41.585 Directive Receive (1Ah): Supported 00:09:41.585 Virtualization Management (1Ch): Supported 00:09:41.585 Doorbell Buffer Config (7Ch): Supported 00:09:41.585 Format NVM (80h): Supported LBA-Change 00:09:41.585 I/O Commands 00:09:41.585 ------------ 00:09:41.585 Flush (00h): Supported LBA-Change 00:09:41.586 Write (01h): Supported LBA-Change 00:09:41.586 Read (02h): Supported 00:09:41.586 Compare (05h): Supported 00:09:41.586 Write Zeroes (08h): Supported LBA-Change 00:09:41.586 
Dataset Management (09h): Supported LBA-Change 00:09:41.586 Unknown (0Ch): Supported 00:09:41.586 [2024-11-29 15:51:52.938665] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 63373 terminated unexpected 00:09:41.586 [2024-11-29 15:51:52.940584] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:07.0] process 63373 terminated unexpected 00:09:41.586 Unknown (12h): Supported 00:09:41.586 Copy (19h): Supported LBA-Change 00:09:41.586 Unknown (1Dh): Supported LBA-Change 00:09:41.586 00:09:41.586 Error Log 00:09:41.586 ========= 00:09:41.586 00:09:41.586 Arbitration 00:09:41.586 =========== 00:09:41.586 Arbitration Burst: no limit 00:09:41.586 00:09:41.586 Power Management 00:09:41.586 ================ 00:09:41.586 Number of Power States: 1 00:09:41.586 Current Power State: Power State #0 00:09:41.586 Power State #0: 00:09:41.586 Max Power: 25.00 W 00:09:41.586 Non-Operational State: Operational 00:09:41.586 Entry Latency: 16 microseconds 00:09:41.586 Exit Latency: 4 microseconds 00:09:41.586 Relative Read Throughput: 0 00:09:41.586 Relative Read Latency: 0 00:09:41.586 Relative Write Throughput: 0 00:09:41.586 Relative Write Latency: 0 00:09:41.586 Idle Power: Not Reported 00:09:41.586 Active Power: Not Reported 00:09:41.586 Non-Operational Permissive Mode: Not Supported 00:09:41.586 00:09:41.586 Health Information 00:09:41.586 ================== 00:09:41.586 Critical Warnings: 00:09:41.586 Available Spare Space: OK 00:09:41.586 Temperature: OK 00:09:41.586 Device Reliability: OK 00:09:41.586 Read Only: No 00:09:41.586 Volatile Memory Backup: OK 00:09:41.586 Current Temperature: 323 Kelvin (50 Celsius) 00:09:41.586 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:41.586 Available Spare: 0% 00:09:41.586 Available Spare Threshold: 0% 00:09:41.586 Life Percentage Used: 0% 00:09:41.586 Data Units Read: 1881 00:09:41.586 Data Units Written: 871 00:09:41.586 Host Read Commands: 96920 00:09:41.586 Host Write Commands: 48279 00:09:41.586 Controller Busy Time: 0 minutes 00:09:41.586 Power Cycles: 0 00:09:41.586 Power On Hours: 0 hours 00:09:41.586 Unsafe Shutdowns: 0 00:09:41.586 Unrecoverable Media Errors: 0 00:09:41.586 Lifetime Error Log Entries: 0 00:09:41.586 Warning Temperature Time: 0 minutes 00:09:41.586 Critical Temperature Time: 0 minutes 00:09:41.586 00:09:41.586 Number of Queues 00:09:41.586 ================ 00:09:41.586 Number of I/O Submission Queues: 64 00:09:41.586 Number of I/O Completion Queues: 64 00:09:41.586 00:09:41.586 ZNS Specific Controller Data 00:09:41.586 ============================ 00:09:41.586 Zone Append Size Limit: 0 00:09:41.586 00:09:41.586 00:09:41.586 Active Namespaces 00:09:41.586 ================= 00:09:41.586 Namespace ID:1 00:09:41.586 Error Recovery Timeout: Unlimited 00:09:41.586 Command Set Identifier: NVM (00h) 00:09:41.586 Deallocate: Supported 00:09:41.586 Deallocated/Unwritten Error: Supported 00:09:41.586 Deallocated Read Value: All 0x00 00:09:41.586 Deallocate in Write Zeroes: Not Supported 00:09:41.586 Deallocated Guard Field: 0xFFFF 00:09:41.586 Flush: Supported 00:09:41.586 Reservation: Not Supported 00:09:41.586 Metadata Transferred as: Separate Metadata Buffer 00:09:41.586 Namespace Sharing Capabilities: Private 00:09:41.586 Size (in LBAs): 1548666 (5GiB) 00:09:41.586 Capacity (in LBAs): 1548666 (5GiB) 00:09:41.586 Utilization (in LBAs): 1548666 (5GiB) 00:09:41.586 Thin Provisioning: Not Supported 00:09:41.586 Per-NS Atomic Units: No 00:09:41.586 Maximum Single Source
Range Length: 128 00:09:41.586 Maximum Copy Length: 128 00:09:41.586 Maximum Source Range Count: 128 00:09:41.586 NGUID/EUI64 Never Reused: No 00:09:41.586 Namespace Write Protected: No 00:09:41.586 Number of LBA Formats: 8 00:09:41.586 Current LBA Format: LBA Format #07 00:09:41.586 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:41.586 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:41.586 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:41.586 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:41.586 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:41.586 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:41.586 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:41.586 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:41.586 00:09:41.586 ===================================================== 00:09:41.586 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:41.586 ===================================================== 00:09:41.586 Controller Capabilities/Features 00:09:41.586 ================================ 00:09:41.586 Vendor ID: 1b36 00:09:41.586 Subsystem Vendor ID: 1af4 00:09:41.586 Serial Number: 12341 00:09:41.586 Model Number: QEMU NVMe Ctrl 00:09:41.586 Firmware Version: 8.0.0 00:09:41.586 Recommended Arb Burst: 6 00:09:41.586 IEEE OUI Identifier: 00 54 52 00:09:41.586 Multi-path I/O 00:09:41.586 May have multiple subsystem ports: No 00:09:41.586 May have multiple controllers: No 00:09:41.586 Associated with SR-IOV VF: No 00:09:41.586 Max Data Transfer Size: 524288 00:09:41.586 Max Number of Namespaces: 256 00:09:41.586 Max Number of I/O Queues: 64 00:09:41.586 NVMe Specification Version (VS): 1.4 00:09:41.586 NVMe Specification Version (Identify): 1.4 00:09:41.586 Maximum Queue Entries: 2048 00:09:41.586 Contiguous Queues Required: Yes 00:09:41.586 Arbitration Mechanisms Supported 00:09:41.586 Weighted Round Robin: Not Supported 00:09:41.586 Vendor Specific: Not Supported 00:09:41.586 Reset Timeout: 7500 ms 00:09:41.586 Doorbell Stride: 4 bytes 00:09:41.586 NVM Subsystem Reset: Not Supported 00:09:41.586 Command Sets Supported 00:09:41.586 NVM Command Set: Supported 00:09:41.586 Boot Partition: Not Supported 00:09:41.586 Memory Page Size Minimum: 4096 bytes 00:09:41.586 Memory Page Size Maximum: 65536 bytes 00:09:41.586 Persistent Memory Region: Not Supported 00:09:41.586 Optional Asynchronous Events Supported 00:09:41.586 Namespace Attribute Notices: Supported 00:09:41.586 Firmware Activation Notices: Not Supported 00:09:41.586 ANA Change Notices: Not Supported 00:09:41.586 PLE Aggregate Log Change Notices: Not Supported 00:09:41.586 LBA Status Info Alert Notices: Not Supported 00:09:41.586 EGE Aggregate Log Change Notices: Not Supported 00:09:41.586 Normal NVM Subsystem Shutdown event: Not Supported 00:09:41.586 Zone Descriptor Change Notices: Not Supported 00:09:41.586 Discovery Log Change Notices: Not Supported 00:09:41.586 Controller Attributes 00:09:41.586 128-bit Host Identifier: Not Supported 00:09:41.586 Non-Operational Permissive Mode: Not Supported 00:09:41.586 NVM Sets: Not Supported 00:09:41.586 Read Recovery Levels: Not Supported 00:09:41.586 Endurance Groups: Not Supported 00:09:41.586 Predictable Latency Mode: Not Supported 00:09:41.586 Traffic Based Keep ALive: Not Supported 00:09:41.586 Namespace Granularity: Not Supported 00:09:41.586 SQ Associations: Not Supported 00:09:41.586 UUID List: Not Supported 00:09:41.586 Multi-Domain Subsystem: Not Supported 00:09:41.586 Fixed Capacity Management: Not Supported 00:09:41.586 
Variable Capacity Management: Not Supported 00:09:41.586 Delete Endurance Group: Not Supported 00:09:41.586 Delete NVM Set: Not Supported 00:09:41.586 Extended LBA Formats Supported: Supported 00:09:41.586 Flexible Data Placement Supported: Not Supported 00:09:41.586 00:09:41.586 Controller Memory Buffer Support 00:09:41.586 ================================ 00:09:41.586 Supported: No 00:09:41.586 00:09:41.586 Persistent Memory Region Support 00:09:41.586 ================================ 00:09:41.586 Supported: No 00:09:41.586 00:09:41.586 Admin Command Set Attributes 00:09:41.586 ============================ 00:09:41.586 Security Send/Receive: Not Supported 00:09:41.586 Format NVM: Supported 00:09:41.586 Firmware Activate/Download: Not Supported 00:09:41.586 Namespace Management: Supported 00:09:41.586 Device Self-Test: Not Supported 00:09:41.586 Directives: Supported 00:09:41.586 NVMe-MI: Not Supported 00:09:41.586 Virtualization Management: Not Supported 00:09:41.586 Doorbell Buffer Config: Supported 00:09:41.586 Get LBA Status Capability: Not Supported 00:09:41.586 Command & Feature Lockdown Capability: Not Supported 00:09:41.586 Abort Command Limit: 4 00:09:41.586 Async Event Request Limit: 4 00:09:41.586 Number of Firmware Slots: N/A 00:09:41.586 Firmware Slot 1 Read-Only: N/A 00:09:41.586 Firmware Activation Without Reset: N/A 00:09:41.586 Multiple Update Detection Support: N/A 00:09:41.586 Firmware Update Granularity: No Information Provided 00:09:41.586 Per-Namespace SMART Log: Yes 00:09:41.586 Asymmetric Namespace Access Log Page: Not Supported 00:09:41.587 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:41.587 Command Effects Log Page: Supported 00:09:41.587 Get Log Page Extended Data: Supported 00:09:41.587 Telemetry Log Pages: Not Supported 00:09:41.587 Persistent Event Log Pages: Not Supported 00:09:41.587 Supported Log Pages Log Page: May Support 00:09:41.587 Commands Supported & Effects Log Page: Not Supported 00:09:41.587 Feature Identifiers & Effects Log Page:May Support 00:09:41.587 NVMe-MI Commands & Effects Log Page: May Support 00:09:41.587 Data Area 4 for Telemetry Log: Not Supported 00:09:41.587 Error Log Page Entries Supported: 1 00:09:41.587 Keep Alive: Not Supported 00:09:41.587 00:09:41.587 NVM Command Set Attributes 00:09:41.587 ========================== 00:09:41.587 Submission Queue Entry Size 00:09:41.587 Max: 64 00:09:41.587 Min: 64 00:09:41.587 Completion Queue Entry Size 00:09:41.587 Max: 16 00:09:41.587 Min: 16 00:09:41.587 Number of Namespaces: 256 00:09:41.587 Compare Command: Supported 00:09:41.587 Write Uncorrectable Command: Not Supported 00:09:41.587 Dataset Management Command: Supported 00:09:41.587 Write Zeroes Command: Supported 00:09:41.587 Set Features Save Field: Supported 00:09:41.587 Reservations: Not Supported 00:09:41.587 Timestamp: Supported 00:09:41.587 Copy: Supported 00:09:41.587 Volatile Write Cache: Present 00:09:41.587 Atomic Write Unit (Normal): 1 00:09:41.587 Atomic Write Unit (PFail): 1 00:09:41.587 Atomic Compare & Write Unit: 1 00:09:41.587 Fused Compare & Write: Not Supported 00:09:41.587 Scatter-Gather List 00:09:41.587 SGL Command Set: Supported 00:09:41.587 SGL Keyed: Not Supported 00:09:41.587 SGL Bit Bucket Descriptor: Not Supported 00:09:41.587 SGL Metadata Pointer: Not Supported 00:09:41.587 Oversized SGL: Not Supported 00:09:41.587 SGL Metadata Address: Not Supported 00:09:41.587 SGL Offset: Not Supported 00:09:41.587 Transport SGL Data Block: Not Supported 00:09:41.587 Replay Protected Memory Block: Not Supported 
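
The 12341 dump in progress above advertises "Volatile Write Cache: Present" in its NVM command set attributes. A minimal sketch of how a script could check one such capability flag before relying on Flush (00h), reusing the spdk_nvme_identify invocation that appears verbatim later in this log (the has_vwc helper name is hypothetical, not part of the harness):

    IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
    # Succeeds when the controller at the given PCI address reports a volatile
    # write cache in its identify data; Flush (00h) only matters in that case.
    has_vwc() {
        "$IDENTIFY" -r "trtype:PCIe traddr:$1" -i 0 | grep -q 'Volatile Write Cache:.*Present'
    }
    if has_vwc 0000:00:07.0; then
        echo "12341 reports a volatile write cache; Flush (00h) is meaningful"
    fi
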
00:09:41.587 00:09:41.587 Firmware Slot Information 00:09:41.587 ========================= 00:09:41.587 Active slot: 1 00:09:41.587 Slot 1 Firmware Revision: 1.0 00:09:41.587 00:09:41.587 00:09:41.587 Commands Supported and Effects 00:09:41.587 ============================== 00:09:41.587 Admin Commands 00:09:41.587 -------------- 00:09:41.587 Delete I/O Submission Queue (00h): Supported 00:09:41.587 Create I/O Submission Queue (01h): Supported 00:09:41.587 Get Log Page (02h): Supported 00:09:41.587 Delete I/O Completion Queue (04h): Supported 00:09:41.587 Create I/O Completion Queue (05h): Supported 00:09:41.587 Identify (06h): Supported 00:09:41.587 Abort (08h): Supported 00:09:41.587 Set Features (09h): Supported 00:09:41.587 Get Features (0Ah): Supported 00:09:41.587 Asynchronous Event Request (0Ch): Supported 00:09:41.587 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:41.587 Directive Send (19h): Supported 00:09:41.587 Directive Receive (1Ah): Supported 00:09:41.587 Virtualization Management (1Ch): Supported 00:09:41.587 Doorbell Buffer Config (7Ch): Supported 00:09:41.587 Format NVM (80h): Supported LBA-Change 00:09:41.587 I/O Commands 00:09:41.587 ------------ 00:09:41.587 Flush (00h): Supported LBA-Change 00:09:41.587 Write (01h): Supported LBA-Change 00:09:41.587 Read (02h): Supported 00:09:41.587 Compare (05h): Supported 00:09:41.587 Write Zeroes (08h): Supported LBA-Change 00:09:41.587 Dataset Management (09h): Supported LBA-Change 00:09:41.587 Unknown (0Ch): Supported 00:09:41.587 Unknown (12h): Supported 00:09:41.587 Copy (19h): Supported LBA-Change 00:09:41.587 Unknown (1Dh): Supported LBA-Change 00:09:41.587 00:09:41.587 Error Log 00:09:41.587 ========= 00:09:41.587 00:09:41.587 Arbitration 00:09:41.587 =========== 00:09:41.587 Arbitration Burst: no limit 00:09:41.587 00:09:41.587 Power Management 00:09:41.587 ================ 00:09:41.587 Number of Power States: 1 00:09:41.587 Current Power State: Power State #0 00:09:41.587 Power State #0: 00:09:41.587 Max Power: 25.00 W 00:09:41.587 Non-Operational State: Operational 00:09:41.587 Entry Latency: 16 microseconds 00:09:41.587 Exit Latency: 4 microseconds 00:09:41.587 Relative Read Throughput: 0 00:09:41.587 Relative Read Latency: 0 00:09:41.587 Relative Write Throughput: 0 00:09:41.587 Relative Write Latency: 0 00:09:41.587 Idle Power: Not Reported 00:09:41.587 Active Power: Not Reported 00:09:41.587 Non-Operational Permissive Mode: Not Supported 00:09:41.587 00:09:41.587 Health Information 00:09:41.587 ================== 00:09:41.587 Critical Warnings: 00:09:41.587 Available Spare Space: OK 00:09:41.587 Temperature: OK 00:09:41.587 Device Reliability: OK 00:09:41.587 Read Only: No 00:09:41.587 Volatile Memory Backup: OK 00:09:41.587 Current Temperature: 323 Kelvin (50 Celsius) 00:09:41.587 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:41.587 Available Spare: 0% 00:09:41.587 Available Spare Threshold: 0% 00:09:41.587 Life Percentage Used: 0% 00:09:41.587 Data Units Read: 1294 00:09:41.587 Data Units Written: [2024-11-29 15:51:52.942620] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:08.0] process 63373 terminated unexpected 00:09:41.587 601 00:09:41.587 Host Read Commands: 65172 00:09:41.587 Host Write Commands: 32160 00:09:41.587 Controller Busy Time: 0 minutes 00:09:41.587 Power Cycles: 0 00:09:41.587 Power On Hours: 0 hours 00:09:41.587 Unsafe Shutdowns: 0 00:09:41.587 Unrecoverable Media Errors: 0 00:09:41.587 Lifetime Error Log Entries: 0 00:09:41.587 Warning Temperature 
Time: 0 minutes 00:09:41.587 Critical Temperature Time: 0 minutes 00:09:41.587 00:09:41.587 Number of Queues 00:09:41.587 ================ 00:09:41.587 Number of I/O Submission Queues: 64 00:09:41.587 Number of I/O Completion Queues: 64 00:09:41.587 00:09:41.587 ZNS Specific Controller Data 00:09:41.587 ============================ 00:09:41.587 Zone Append Size Limit: 0 00:09:41.587 00:09:41.587 00:09:41.587 Active Namespaces 00:09:41.587 ================= 00:09:41.587 Namespace ID:1 00:09:41.587 Error Recovery Timeout: Unlimited 00:09:41.587 Command Set Identifier: NVM (00h) 00:09:41.587 Deallocate: Supported 00:09:41.587 Deallocated/Unwritten Error: Supported 00:09:41.587 Deallocated Read Value: All 0x00 00:09:41.587 Deallocate in Write Zeroes: Not Supported 00:09:41.587 Deallocated Guard Field: 0xFFFF 00:09:41.587 Flush: Supported 00:09:41.587 Reservation: Not Supported 00:09:41.587 Namespace Sharing Capabilities: Private 00:09:41.587 Size (in LBAs): 1310720 (5GiB) 00:09:41.587 Capacity (in LBAs): 1310720 (5GiB) 00:09:41.587 Utilization (in LBAs): 1310720 (5GiB) 00:09:41.587 Thin Provisioning: Not Supported 00:09:41.587 Per-NS Atomic Units: No 00:09:41.587 Maximum Single Source Range Length: 128 00:09:41.587 Maximum Copy Length: 128 00:09:41.587 Maximum Source Range Count: 128 00:09:41.587 NGUID/EUI64 Never Reused: No 00:09:41.587 Namespace Write Protected: No 00:09:41.587 Number of LBA Formats: 8 00:09:41.587 Current LBA Format: LBA Format #04 00:09:41.587 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:41.587 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:41.587 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:41.587 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:41.587 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:41.587 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:41.587 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:41.587 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:41.587 00:09:41.587 ===================================================== 00:09:41.587 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:41.587 ===================================================== 00:09:41.587 Controller Capabilities/Features 00:09:41.587 ================================ 00:09:41.587 Vendor ID: 1b36 00:09:41.587 Subsystem Vendor ID: 1af4 00:09:41.587 Serial Number: 12342 00:09:41.587 Model Number: QEMU NVMe Ctrl 00:09:41.587 Firmware Version: 8.0.0 00:09:41.587 Recommended Arb Burst: 6 00:09:41.587 IEEE OUI Identifier: 00 54 52 00:09:41.587 Multi-path I/O 00:09:41.587 May have multiple subsystem ports: No 00:09:41.587 May have multiple controllers: No 00:09:41.587 Associated with SR-IOV VF: No 00:09:41.587 Max Data Transfer Size: 524288 00:09:41.587 Max Number of Namespaces: 256 00:09:41.587 Max Number of I/O Queues: 64 00:09:41.587 NVMe Specification Version (VS): 1.4 00:09:41.587 NVMe Specification Version (Identify): 1.4 00:09:41.587 Maximum Queue Entries: 2048 00:09:41.587 Contiguous Queues Required: Yes 00:09:41.587 Arbitration Mechanisms Supported 00:09:41.588 Weighted Round Robin: Not Supported 00:09:41.588 Vendor Specific: Not Supported 00:09:41.588 Reset Timeout: 7500 ms 00:09:41.588 Doorbell Stride: 4 bytes 00:09:41.588 NVM Subsystem Reset: Not Supported 00:09:41.588 Command Sets Supported 00:09:41.588 NVM Command Set: Supported 00:09:41.588 Boot Partition: Not Supported 00:09:41.588 Memory Page Size Minimum: 4096 bytes 00:09:41.588 Memory Page Size Maximum: 65536 bytes 00:09:41.588 Persistent Memory Region: Not Supported 
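
Namespace sizes in these dumps are reported in LBAs with a rounded byte figure in parentheses: the byte count is the LBA count multiplied by the data size of the current LBA format (for the separate-metadata-buffer namespace on 12340, the 64 metadata bytes per block live outside that data size). A quick arithmetic check for the 12341 namespace above, as a sketch:

    # "Size (in LBAs): 1310720 (5GiB)" with Current LBA Format #04 (Data Size: 4096).
    lbas=1310720
    data_size=4096
    bytes=$((lbas * data_size))
    echo "${bytes} bytes = $((bytes >> 30)) GiB"   # 5368709120 bytes = 5 GiB exactly

The same arithmetic, floored to whole GiB, would also explain why the 12340 namespace's 1548666 LBAs print as "(5GiB)" rather than 5.9.
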
00:09:41.588 Optional Asynchronous Events Supported 00:09:41.588 Namespace Attribute Notices: Supported 00:09:41.588 Firmware Activation Notices: Not Supported 00:09:41.588 ANA Change Notices: Not Supported 00:09:41.588 PLE Aggregate Log Change Notices: Not Supported 00:09:41.588 LBA Status Info Alert Notices: Not Supported 00:09:41.588 EGE Aggregate Log Change Notices: Not Supported 00:09:41.588 Normal NVM Subsystem Shutdown event: Not Supported 00:09:41.588 Zone Descriptor Change Notices: Not Supported 00:09:41.588 Discovery Log Change Notices: Not Supported 00:09:41.588 Controller Attributes 00:09:41.588 128-bit Host Identifier: Not Supported 00:09:41.588 Non-Operational Permissive Mode: Not Supported 00:09:41.588 NVM Sets: Not Supported 00:09:41.588 Read Recovery Levels: Not Supported 00:09:41.588 Endurance Groups: Not Supported 00:09:41.588 Predictable Latency Mode: Not Supported 00:09:41.588 Traffic Based Keep ALive: Not Supported 00:09:41.588 Namespace Granularity: Not Supported 00:09:41.588 SQ Associations: Not Supported 00:09:41.588 UUID List: Not Supported 00:09:41.588 Multi-Domain Subsystem: Not Supported 00:09:41.588 Fixed Capacity Management: Not Supported 00:09:41.588 Variable Capacity Management: Not Supported 00:09:41.588 Delete Endurance Group: Not Supported 00:09:41.588 Delete NVM Set: Not Supported 00:09:41.588 Extended LBA Formats Supported: Supported 00:09:41.588 Flexible Data Placement Supported: Not Supported 00:09:41.588 00:09:41.588 Controller Memory Buffer Support 00:09:41.588 ================================ 00:09:41.588 Supported: No 00:09:41.588 00:09:41.588 Persistent Memory Region Support 00:09:41.588 ================================ 00:09:41.588 Supported: No 00:09:41.588 00:09:41.588 Admin Command Set Attributes 00:09:41.588 ============================ 00:09:41.588 Security Send/Receive: Not Supported 00:09:41.588 Format NVM: Supported 00:09:41.588 Firmware Activate/Download: Not Supported 00:09:41.588 Namespace Management: Supported 00:09:41.588 Device Self-Test: Not Supported 00:09:41.588 Directives: Supported 00:09:41.588 NVMe-MI: Not Supported 00:09:41.588 Virtualization Management: Not Supported 00:09:41.588 Doorbell Buffer Config: Supported 00:09:41.588 Get LBA Status Capability: Not Supported 00:09:41.588 Command & Feature Lockdown Capability: Not Supported 00:09:41.588 Abort Command Limit: 4 00:09:41.588 Async Event Request Limit: 4 00:09:41.588 Number of Firmware Slots: N/A 00:09:41.588 Firmware Slot 1 Read-Only: N/A 00:09:41.588 Firmware Activation Without Reset: N/A 00:09:41.588 Multiple Update Detection Support: N/A 00:09:41.588 Firmware Update Granularity: No Information Provided 00:09:41.588 Per-Namespace SMART Log: Yes 00:09:41.588 Asymmetric Namespace Access Log Page: Not Supported 00:09:41.588 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:41.588 Command Effects Log Page: Supported 00:09:41.588 Get Log Page Extended Data: Supported 00:09:41.588 Telemetry Log Pages: Not Supported 00:09:41.588 Persistent Event Log Pages: Not Supported 00:09:41.588 Supported Log Pages Log Page: May Support 00:09:41.588 Commands Supported & Effects Log Page: Not Supported 00:09:41.588 Feature Identifiers & Effects Log Page:May Support 00:09:41.588 NVMe-MI Commands & Effects Log Page: May Support 00:09:41.588 Data Area 4 for Telemetry Log: Not Supported 00:09:41.588 Error Log Page Entries Supported: 1 00:09:41.588 Keep Alive: Not Supported 00:09:41.588 00:09:41.588 NVM Command Set Attributes 00:09:41.588 ========================== 00:09:41.588 Submission Queue 
Entry Size 00:09:41.588 Max: 64 00:09:41.588 Min: 64 00:09:41.588 Completion Queue Entry Size 00:09:41.588 Max: 16 00:09:41.588 Min: 16 00:09:41.588 Number of Namespaces: 256 00:09:41.588 Compare Command: Supported 00:09:41.588 Write Uncorrectable Command: Not Supported 00:09:41.588 Dataset Management Command: Supported 00:09:41.588 Write Zeroes Command: Supported 00:09:41.588 Set Features Save Field: Supported 00:09:41.588 Reservations: Not Supported 00:09:41.588 Timestamp: Supported 00:09:41.588 Copy: Supported 00:09:41.588 Volatile Write Cache: Present 00:09:41.588 Atomic Write Unit (Normal): 1 00:09:41.588 Atomic Write Unit (PFail): 1 00:09:41.588 Atomic Compare & Write Unit: 1 00:09:41.588 Fused Compare & Write: Not Supported 00:09:41.588 Scatter-Gather List 00:09:41.588 SGL Command Set: Supported 00:09:41.588 SGL Keyed: Not Supported 00:09:41.588 SGL Bit Bucket Descriptor: Not Supported 00:09:41.588 SGL Metadata Pointer: Not Supported 00:09:41.588 Oversized SGL: Not Supported 00:09:41.588 SGL Metadata Address: Not Supported 00:09:41.588 SGL Offset: Not Supported 00:09:41.588 Transport SGL Data Block: Not Supported 00:09:41.588 Replay Protected Memory Block: Not Supported 00:09:41.588 00:09:41.588 Firmware Slot Information 00:09:41.588 ========================= 00:09:41.588 Active slot: 1 00:09:41.588 Slot 1 Firmware Revision: 1.0 00:09:41.588 00:09:41.588 00:09:41.588 Commands Supported and Effects 00:09:41.588 ============================== 00:09:41.588 Admin Commands 00:09:41.588 -------------- 00:09:41.588 Delete I/O Submission Queue (00h): Supported 00:09:41.588 Create I/O Submission Queue (01h): Supported 00:09:41.588 Get Log Page (02h): Supported 00:09:41.588 Delete I/O Completion Queue (04h): Supported 00:09:41.588 Create I/O Completion Queue (05h): Supported 00:09:41.588 Identify (06h): Supported 00:09:41.588 Abort (08h): Supported 00:09:41.588 Set Features (09h): Supported 00:09:41.588 Get Features (0Ah): Supported 00:09:41.588 Asynchronous Event Request (0Ch): Supported 00:09:41.588 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:41.588 Directive Send (19h): Supported 00:09:41.588 Directive Receive (1Ah): Supported 00:09:41.588 Virtualization Management (1Ch): Supported 00:09:41.588 Doorbell Buffer Config (7Ch): Supported 00:09:41.588 Format NVM (80h): Supported LBA-Change 00:09:41.588 I/O Commands 00:09:41.588 ------------ 00:09:41.588 Flush (00h): Supported LBA-Change 00:09:41.588 Write (01h): Supported LBA-Change 00:09:41.588 Read (02h): Supported 00:09:41.588 Compare (05h): Supported 00:09:41.588 Write Zeroes (08h): Supported LBA-Change 00:09:41.588 Dataset Management (09h): Supported LBA-Change 00:09:41.588 Unknown (0Ch): Supported 00:09:41.588 Unknown (12h): Supported 00:09:41.588 Copy (19h): Supported LBA-Change 00:09:41.588 Unknown (1Dh): Supported LBA-Change 00:09:41.588 00:09:41.588 Error Log 00:09:41.588 ========= 00:09:41.588 00:09:41.588 Arbitration 00:09:41.588 =========== 00:09:41.588 Arbitration Burst: no limit 00:09:41.588 00:09:41.588 Power Management 00:09:41.588 ================ 00:09:41.588 Number of Power States: 1 00:09:41.588 Current Power State: Power State #0 00:09:41.588 Power State #0: 00:09:41.588 Max Power: 25.00 W 00:09:41.588 Non-Operational State: Operational 00:09:41.589 Entry Latency: 16 microseconds 00:09:41.589 Exit Latency: 4 microseconds 00:09:41.589 Relative Read Throughput: 0 00:09:41.589 Relative Read Latency: 0 00:09:41.589 Relative Write Throughput: 0 00:09:41.589 Relative Write Latency: 0 00:09:41.589 Idle Power: 
Not Reported 00:09:41.589 Active Power: Not Reported 00:09:41.589 Non-Operational Permissive Mode: Not Supported 00:09:41.589 00:09:41.589 Health Information 00:09:41.589 ================== 00:09:41.589 Critical Warnings: 00:09:41.589 Available Spare Space: OK 00:09:41.589 Temperature: OK 00:09:41.589 Device Reliability: OK 00:09:41.589 Read Only: No 00:09:41.589 Volatile Memory Backup: OK 00:09:41.589 Current Temperature: 323 Kelvin (50 Celsius) 00:09:41.589 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:41.589 Available Spare: 0% 00:09:41.589 Available Spare Threshold: 0% 00:09:41.589 Life Percentage Used: 0% 00:09:41.589 Data Units Read: 4051 00:09:41.589 Data Units Written: 1873 00:09:41.589 Host Read Commands: 197498 00:09:41.589 Host Write Commands: 97234 00:09:41.589 Controller Busy Time: 0 minutes 00:09:41.589 Power Cycles: 0 00:09:41.589 Power On Hours: 0 hours 00:09:41.589 Unsafe Shutdowns: 0 00:09:41.589 Unrecoverable Media Errors: 0 00:09:41.589 Lifetime Error Log Entries: 0 00:09:41.589 Warning Temperature Time: 0 minutes 00:09:41.589 Critical Temperature Time: 0 minutes 00:09:41.589 00:09:41.589 Number of Queues 00:09:41.589 ================ 00:09:41.589 Number of I/O Submission Queues: 64 00:09:41.589 Number of I/O Completion Queues: 64 00:09:41.589 00:09:41.589 ZNS Specific Controller Data 00:09:41.589 ============================ 00:09:41.589 Zone Append Size Limit: 0 00:09:41.589 00:09:41.589 00:09:41.589 Active Namespaces 00:09:41.589 ================= 00:09:41.589 Namespace ID:1 00:09:41.589 Error Recovery Timeout: Unlimited 00:09:41.589 Command Set Identifier: NVM (00h) 00:09:41.589 Deallocate: Supported 00:09:41.589 Deallocated/Unwritten Error: Supported 00:09:41.589 Deallocated Read Value: All 0x00 00:09:41.589 Deallocate in Write Zeroes: Not Supported 00:09:41.589 Deallocated Guard Field: 0xFFFF 00:09:41.589 Flush: Supported 00:09:41.589 Reservation: Not Supported 00:09:41.589 Namespace Sharing Capabilities: Private 00:09:41.589 Size (in LBAs): 1048576 (4GiB) 00:09:41.589 Capacity (in LBAs): 1048576 (4GiB) 00:09:41.589 Utilization (in LBAs): 1048576 (4GiB) 00:09:41.589 Thin Provisioning: Not Supported 00:09:41.589 Per-NS Atomic Units: No 00:09:41.589 Maximum Single Source Range Length: 128 00:09:41.589 Maximum Copy Length: 128 00:09:41.589 Maximum Source Range Count: 128 00:09:41.589 NGUID/EUI64 Never Reused: No 00:09:41.589 Namespace Write Protected: No 00:09:41.589 Number of LBA Formats: 8 00:09:41.589 Current LBA Format: LBA Format #04 00:09:41.589 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:41.589 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:41.589 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:41.589 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:41.589 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:41.589 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:41.589 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:41.589 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:41.589 00:09:41.589 Namespace ID:2 00:09:41.589 Error Recovery Timeout: Unlimited 00:09:41.589 Command Set Identifier: NVM (00h) 00:09:41.589 Deallocate: Supported 00:09:41.589 Deallocated/Unwritten Error: Supported 00:09:41.589 Deallocated Read Value: All 0x00 00:09:41.589 Deallocate in Write Zeroes: Not Supported 00:09:41.589 Deallocated Guard Field: 0xFFFF 00:09:41.589 Flush: Supported 00:09:41.589 Reservation: Not Supported 00:09:41.589 Namespace Sharing Capabilities: Private 00:09:41.589 Size (in LBAs): 1048576 (4GiB) 
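
The SMART temperatures in every dump here come in Kelvin with the Celsius value in parentheses, and the printed pairs are consistent with a plain 273-degree offset. A one-line sanity check (a sketch, not something the harness runs):

    # "323 Kelvin (50 Celsius)" and "343 Kelvin (70 Celsius)", as printed above.
    for k in 323 343; do echo "${k} K = $((k - 273)) C"; done
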
00:09:41.589 Capacity (in LBAs): 1048576 (4GiB) 00:09:41.589 Utilization (in LBAs): 1048576 (4GiB) 00:09:41.589 Thin Provisioning: Not Supported 00:09:41.589 Per-NS Atomic Units: No 00:09:41.589 Maximum Single Source Range Length: 128 00:09:41.589 Maximum Copy Length: 128 00:09:41.589 Maximum Source Range Count: 128 00:09:41.589 NGUID/EUI64 Never Reused: No 00:09:41.589 Namespace Write Protected: No 00:09:41.589 Number of LBA Formats: 8 00:09:41.589 Current LBA Format: LBA Format #04 00:09:41.589 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:41.589 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:41.589 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:41.589 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:41.589 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:41.589 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:41.589 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:41.589 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:41.589 00:09:41.589 Namespace ID:3 00:09:41.589 Error Recovery Timeout: Unlimited 00:09:41.589 Command Set Identifier: NVM (00h) 00:09:41.589 Deallocate: Supported 00:09:41.589 Deallocated/Unwritten Error: Supported 00:09:41.589 Deallocated Read Value: All 0x00 00:09:41.589 Deallocate in Write Zeroes: Not Supported 00:09:41.589 Deallocated Guard Field: 0xFFFF 00:09:41.589 Flush: Supported 00:09:41.589 Reservation: Not Supported 00:09:41.589 Namespace Sharing Capabilities: Private 00:09:41.589 Size (in LBAs): 1048576 (4GiB) 00:09:41.589 Capacity (in LBAs): 1048576 (4GiB) 00:09:41.589 Utilization (in LBAs): 1048576 (4GiB) 00:09:41.589 Thin Provisioning: Not Supported 00:09:41.589 Per-NS Atomic Units: No 00:09:41.589 Maximum Single Source Range Length: 128 00:09:41.589 Maximum Copy Length: 128 00:09:41.589 Maximum Source Range Count: 128 00:09:41.589 NGUID/EUI64 Never Reused: No 00:09:41.589 Namespace Write Protected: No 00:09:41.589 Number of LBA Formats: 8 00:09:41.589 Current LBA Format: LBA Format #04 00:09:41.589 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:41.589 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:41.589 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:41.589 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:41.589 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:41.589 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:41.589 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:41.589 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:41.589 00:09:41.589 15:51:52 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:41.589 15:51:52 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:09:41.851 ===================================================== 00:09:41.851 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:41.851 ===================================================== 00:09:41.851 Controller Capabilities/Features 00:09:41.851 ================================ 00:09:41.851 Vendor ID: 1b36 00:09:41.851 Subsystem Vendor ID: 1af4 00:09:41.851 Serial Number: 12340 00:09:41.851 Model Number: QEMU NVMe Ctrl 00:09:41.851 Firmware Version: 8.0.0 00:09:41.851 Recommended Arb Burst: 6 00:09:41.851 IEEE OUI Identifier: 00 54 52 00:09:41.851 Multi-path I/O 00:09:41.851 May have multiple subsystem ports: No 00:09:41.851 May have multiple controllers: No 00:09:41.851 Associated with SR-IOV VF: No 00:09:41.851 Max Data Transfer Size: 524288 00:09:41.851 Max Number of Namespaces: 256 00:09:41.851 Max 
Number of I/O Queues: 64 00:09:41.851 NVMe Specification Version (VS): 1.4 00:09:41.851 NVMe Specification Version (Identify): 1.4 00:09:41.851 Maximum Queue Entries: 2048 00:09:41.851 Contiguous Queues Required: Yes 00:09:41.851 Arbitration Mechanisms Supported 00:09:41.851 Weighted Round Robin: Not Supported 00:09:41.851 Vendor Specific: Not Supported 00:09:41.851 Reset Timeout: 7500 ms 00:09:41.851 Doorbell Stride: 4 bytes 00:09:41.851 NVM Subsystem Reset: Not Supported 00:09:41.851 Command Sets Supported 00:09:41.851 NVM Command Set: Supported 00:09:41.851 Boot Partition: Not Supported 00:09:41.851 Memory Page Size Minimum: 4096 bytes 00:09:41.851 Memory Page Size Maximum: 65536 bytes 00:09:41.851 Persistent Memory Region: Not Supported 00:09:41.851 Optional Asynchronous Events Supported 00:09:41.851 Namespace Attribute Notices: Supported 00:09:41.851 Firmware Activation Notices: Not Supported 00:09:41.851 ANA Change Notices: Not Supported 00:09:41.851 PLE Aggregate Log Change Notices: Not Supported 00:09:41.851 LBA Status Info Alert Notices: Not Supported 00:09:41.851 EGE Aggregate Log Change Notices: Not Supported 00:09:41.851 Normal NVM Subsystem Shutdown event: Not Supported 00:09:41.851 Zone Descriptor Change Notices: Not Supported 00:09:41.851 Discovery Log Change Notices: Not Supported 00:09:41.851 Controller Attributes 00:09:41.851 128-bit Host Identifier: Not Supported 00:09:41.851 Non-Operational Permissive Mode: Not Supported 00:09:41.851 NVM Sets: Not Supported 00:09:41.851 Read Recovery Levels: Not Supported 00:09:41.851 Endurance Groups: Not Supported 00:09:41.851 Predictable Latency Mode: Not Supported 00:09:41.851 Traffic Based Keep ALive: Not Supported 00:09:41.851 Namespace Granularity: Not Supported 00:09:41.851 SQ Associations: Not Supported 00:09:41.851 UUID List: Not Supported 00:09:41.851 Multi-Domain Subsystem: Not Supported 00:09:41.851 Fixed Capacity Management: Not Supported 00:09:41.851 Variable Capacity Management: Not Supported 00:09:41.851 Delete Endurance Group: Not Supported 00:09:41.851 Delete NVM Set: Not Supported 00:09:41.851 Extended LBA Formats Supported: Supported 00:09:41.851 Flexible Data Placement Supported: Not Supported 00:09:41.851 00:09:41.851 Controller Memory Buffer Support 00:09:41.851 ================================ 00:09:41.851 Supported: No 00:09:41.851 00:09:41.851 Persistent Memory Region Support 00:09:41.851 ================================ 00:09:41.851 Supported: No 00:09:41.851 00:09:41.851 Admin Command Set Attributes 00:09:41.851 ============================ 00:09:41.851 Security Send/Receive: Not Supported 00:09:41.851 Format NVM: Supported 00:09:41.851 Firmware Activate/Download: Not Supported 00:09:41.851 Namespace Management: Supported 00:09:41.851 Device Self-Test: Not Supported 00:09:41.851 Directives: Supported 00:09:41.851 NVMe-MI: Not Supported 00:09:41.851 Virtualization Management: Not Supported 00:09:41.851 Doorbell Buffer Config: Supported 00:09:41.851 Get LBA Status Capability: Not Supported 00:09:41.851 Command & Feature Lockdown Capability: Not Supported 00:09:41.851 Abort Command Limit: 4 00:09:41.851 Async Event Request Limit: 4 00:09:41.851 Number of Firmware Slots: N/A 00:09:41.851 Firmware Slot 1 Read-Only: N/A 00:09:41.851 Firmware Activation Without Reset: N/A 00:09:41.851 Multiple Update Detection Support: N/A 00:09:41.851 Firmware Update Granularity: No Information Provided 00:09:41.851 Per-Namespace SMART Log: Yes 00:09:41.851 Asymmetric Namespace Access Log Page: Not Supported 00:09:41.851 Subsystem 
NQN: nqn.2019-08.org.qemu:12340 00:09:41.851 Command Effects Log Page: Supported 00:09:41.851 Get Log Page Extended Data: Supported 00:09:41.852 Telemetry Log Pages: Not Supported 00:09:41.852 Persistent Event Log Pages: Not Supported 00:09:41.852 Supported Log Pages Log Page: May Support 00:09:41.852 Commands Supported & Effects Log Page: Not Supported 00:09:41.852 Feature Identifiers & Effects Log Page:May Support 00:09:41.852 NVMe-MI Commands & Effects Log Page: May Support 00:09:41.852 Data Area 4 for Telemetry Log: Not Supported 00:09:41.852 Error Log Page Entries Supported: 1 00:09:41.852 Keep Alive: Not Supported 00:09:41.852 00:09:41.852 NVM Command Set Attributes 00:09:41.852 ========================== 00:09:41.852 Submission Queue Entry Size 00:09:41.852 Max: 64 00:09:41.852 Min: 64 00:09:41.852 Completion Queue Entry Size 00:09:41.852 Max: 16 00:09:41.852 Min: 16 00:09:41.852 Number of Namespaces: 256 00:09:41.852 Compare Command: Supported 00:09:41.852 Write Uncorrectable Command: Not Supported 00:09:41.852 Dataset Management Command: Supported 00:09:41.852 Write Zeroes Command: Supported 00:09:41.852 Set Features Save Field: Supported 00:09:41.852 Reservations: Not Supported 00:09:41.852 Timestamp: Supported 00:09:41.852 Copy: Supported 00:09:41.852 Volatile Write Cache: Present 00:09:41.852 Atomic Write Unit (Normal): 1 00:09:41.852 Atomic Write Unit (PFail): 1 00:09:41.852 Atomic Compare & Write Unit: 1 00:09:41.852 Fused Compare & Write: Not Supported 00:09:41.852 Scatter-Gather List 00:09:41.852 SGL Command Set: Supported 00:09:41.852 SGL Keyed: Not Supported 00:09:41.852 SGL Bit Bucket Descriptor: Not Supported 00:09:41.852 SGL Metadata Pointer: Not Supported 00:09:41.852 Oversized SGL: Not Supported 00:09:41.852 SGL Metadata Address: Not Supported 00:09:41.852 SGL Offset: Not Supported 00:09:41.852 Transport SGL Data Block: Not Supported 00:09:41.852 Replay Protected Memory Block: Not Supported 00:09:41.852 00:09:41.852 Firmware Slot Information 00:09:41.852 ========================= 00:09:41.852 Active slot: 1 00:09:41.852 Slot 1 Firmware Revision: 1.0 00:09:41.852 00:09:41.852 00:09:41.852 Commands Supported and Effects 00:09:41.852 ============================== 00:09:41.852 Admin Commands 00:09:41.852 -------------- 00:09:41.852 Delete I/O Submission Queue (00h): Supported 00:09:41.852 Create I/O Submission Queue (01h): Supported 00:09:41.852 Get Log Page (02h): Supported 00:09:41.852 Delete I/O Completion Queue (04h): Supported 00:09:41.852 Create I/O Completion Queue (05h): Supported 00:09:41.852 Identify (06h): Supported 00:09:41.852 Abort (08h): Supported 00:09:41.852 Set Features (09h): Supported 00:09:41.852 Get Features (0Ah): Supported 00:09:41.852 Asynchronous Event Request (0Ch): Supported 00:09:41.852 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:41.852 Directive Send (19h): Supported 00:09:41.852 Directive Receive (1Ah): Supported 00:09:41.852 Virtualization Management (1Ch): Supported 00:09:41.852 Doorbell Buffer Config (7Ch): Supported 00:09:41.852 Format NVM (80h): Supported LBA-Change 00:09:41.852 I/O Commands 00:09:41.852 ------------ 00:09:41.852 Flush (00h): Supported LBA-Change 00:09:41.852 Write (01h): Supported LBA-Change 00:09:41.852 Read (02h): Supported 00:09:41.852 Compare (05h): Supported 00:09:41.852 Write Zeroes (08h): Supported LBA-Change 00:09:41.852 Dataset Management (09h): Supported LBA-Change 00:09:41.852 Unknown (0Ch): Supported 00:09:41.852 Unknown (12h): Supported 00:09:41.852 Copy (19h): Supported LBA-Change 
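
The per-controller dumps from here on come out of the loop at nvme.sh lines 15-16, quoted in the log as 'for bdf in "${bdfs[@]}"' followed by the spdk_nvme_identify call; the dump in progress is its first iteration, against 0000:00:06.0. A sketch of that loop, with the contents of bdfs assumed from the four PCI addresses this log touches:

    # Only the loop header and the identify command are verbatim from nvme.sh;
    # the bdfs contents are inferred from the addresses seen in this log.
    bdfs=(0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0)
    for bdf in "${bdfs[@]}"; do
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
            -r "trtype:PCIe traddr:${bdf}" -i 0
    done
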
00:09:41.852 Unknown (1Dh): Supported LBA-Change 00:09:41.852 00:09:41.852 Error Log 00:09:41.852 ========= 00:09:41.852 00:09:41.852 Arbitration 00:09:41.852 =========== 00:09:41.852 Arbitration Burst: no limit 00:09:41.852 00:09:41.852 Power Management 00:09:41.852 ================ 00:09:41.852 Number of Power States: 1 00:09:41.852 Current Power State: Power State #0 00:09:41.852 Power State #0: 00:09:41.852 Max Power: 25.00 W 00:09:41.852 Non-Operational State: Operational 00:09:41.852 Entry Latency: 16 microseconds 00:09:41.852 Exit Latency: 4 microseconds 00:09:41.852 Relative Read Throughput: 0 00:09:41.852 Relative Read Latency: 0 00:09:41.852 Relative Write Throughput: 0 00:09:41.852 Relative Write Latency: 0 00:09:41.852 Idle Power: Not Reported 00:09:41.852 Active Power: Not Reported 00:09:41.852 Non-Operational Permissive Mode: Not Supported 00:09:41.852 00:09:41.852 Health Information 00:09:41.852 ================== 00:09:41.852 Critical Warnings: 00:09:41.852 Available Spare Space: OK 00:09:41.852 Temperature: OK 00:09:41.852 Device Reliability: OK 00:09:41.852 Read Only: No 00:09:41.852 Volatile Memory Backup: OK 00:09:41.852 Current Temperature: 323 Kelvin (50 Celsius) 00:09:41.852 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:41.852 Available Spare: 0% 00:09:41.852 Available Spare Threshold: 0% 00:09:41.852 Life Percentage Used: 0% 00:09:41.852 Data Units Read: 1881 00:09:41.852 Data Units Written: 871 00:09:41.852 Host Read Commands: 96920 00:09:41.852 Host Write Commands: 48279 00:09:41.852 Controller Busy Time: 0 minutes 00:09:41.852 Power Cycles: 0 00:09:41.852 Power On Hours: 0 hours 00:09:41.852 Unsafe Shutdowns: 0 00:09:41.852 Unrecoverable Media Errors: 0 00:09:41.852 Lifetime Error Log Entries: 0 00:09:41.852 Warning Temperature Time: 0 minutes 00:09:41.852 Critical Temperature Time: 0 minutes 00:09:41.852 00:09:41.852 Number of Queues 00:09:41.852 ================ 00:09:41.852 Number of I/O Submission Queues: 64 00:09:41.852 Number of I/O Completion Queues: 64 00:09:41.852 00:09:41.852 ZNS Specific Controller Data 00:09:41.852 ============================ 00:09:41.852 Zone Append Size Limit: 0 00:09:41.852 00:09:41.852 00:09:41.852 Active Namespaces 00:09:41.852 ================= 00:09:41.852 Namespace ID:1 00:09:41.852 Error Recovery Timeout: Unlimited 00:09:41.852 Command Set Identifier: NVM (00h) 00:09:41.852 Deallocate: Supported 00:09:41.852 Deallocated/Unwritten Error: Supported 00:09:41.852 Deallocated Read Value: All 0x00 00:09:41.852 Deallocate in Write Zeroes: Not Supported 00:09:41.852 Deallocated Guard Field: 0xFFFF 00:09:41.852 Flush: Supported 00:09:41.852 Reservation: Not Supported 00:09:41.852 Metadata Transferred as: Separate Metadata Buffer 00:09:41.852 Namespace Sharing Capabilities: Private 00:09:41.852 Size (in LBAs): 1548666 (5GiB) 00:09:41.852 Capacity (in LBAs): 1548666 (5GiB) 00:09:41.852 Utilization (in LBAs): 1548666 (5GiB) 00:09:41.852 Thin Provisioning: Not Supported 00:09:41.852 Per-NS Atomic Units: No 00:09:41.852 Maximum Single Source Range Length: 128 00:09:41.852 Maximum Copy Length: 128 00:09:41.852 Maximum Source Range Count: 128 00:09:41.852 NGUID/EUI64 Never Reused: No 00:09:41.852 Namespace Write Protected: No 00:09:41.852 Number of LBA Formats: 8 00:09:41.852 Current LBA Format: LBA Format #07 00:09:41.852 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:41.852 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:41.852 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:41.852 LBA Format #03: Data Size: 512 
Metadata Size: 64 00:09:41.852 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:41.852 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:41.852 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:41.852 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:41.852 00:09:41.852 15:51:53 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:41.852 15:51:53 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 00:09:42.112 ===================================================== 00:09:42.112 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:42.112 ===================================================== 00:09:42.112 Controller Capabilities/Features 00:09:42.112 ================================ 00:09:42.112 Vendor ID: 1b36 00:09:42.112 Subsystem Vendor ID: 1af4 00:09:42.112 Serial Number: 12341 00:09:42.112 Model Number: QEMU NVMe Ctrl 00:09:42.112 Firmware Version: 8.0.0 00:09:42.112 Recommended Arb Burst: 6 00:09:42.112 IEEE OUI Identifier: 00 54 52 00:09:42.112 Multi-path I/O 00:09:42.112 May have multiple subsystem ports: No 00:09:42.112 May have multiple controllers: No 00:09:42.112 Associated with SR-IOV VF: No 00:09:42.112 Max Data Transfer Size: 524288 00:09:42.112 Max Number of Namespaces: 256 00:09:42.112 Max Number of I/O Queues: 64 00:09:42.112 NVMe Specification Version (VS): 1.4 00:09:42.112 NVMe Specification Version (Identify): 1.4 00:09:42.112 Maximum Queue Entries: 2048 00:09:42.112 Contiguous Queues Required: Yes 00:09:42.112 Arbitration Mechanisms Supported 00:09:42.112 Weighted Round Robin: Not Supported 00:09:42.112 Vendor Specific: Not Supported 00:09:42.112 Reset Timeout: 7500 ms 00:09:42.112 Doorbell Stride: 4 bytes 00:09:42.112 NVM Subsystem Reset: Not Supported 00:09:42.112 Command Sets Supported 00:09:42.112 NVM Command Set: Supported 00:09:42.112 Boot Partition: Not Supported 00:09:42.112 Memory Page Size Minimum: 4096 bytes 00:09:42.112 Memory Page Size Maximum: 65536 bytes 00:09:42.112 Persistent Memory Region: Not Supported 00:09:42.112 Optional Asynchronous Events Supported 00:09:42.112 Namespace Attribute Notices: Supported 00:09:42.112 Firmware Activation Notices: Not Supported 00:09:42.112 ANA Change Notices: Not Supported 00:09:42.112 PLE Aggregate Log Change Notices: Not Supported 00:09:42.112 LBA Status Info Alert Notices: Not Supported 00:09:42.112 EGE Aggregate Log Change Notices: Not Supported 00:09:42.112 Normal NVM Subsystem Shutdown event: Not Supported 00:09:42.112 Zone Descriptor Change Notices: Not Supported 00:09:42.112 Discovery Log Change Notices: Not Supported 00:09:42.112 Controller Attributes 00:09:42.112 128-bit Host Identifier: Not Supported 00:09:42.112 Non-Operational Permissive Mode: Not Supported 00:09:42.112 NVM Sets: Not Supported 00:09:42.112 Read Recovery Levels: Not Supported 00:09:42.112 Endurance Groups: Not Supported 00:09:42.112 Predictable Latency Mode: Not Supported 00:09:42.112 Traffic Based Keep ALive: Not Supported 00:09:42.112 Namespace Granularity: Not Supported 00:09:42.112 SQ Associations: Not Supported 00:09:42.112 UUID List: Not Supported 00:09:42.112 Multi-Domain Subsystem: Not Supported 00:09:42.112 Fixed Capacity Management: Not Supported 00:09:42.112 Variable Capacity Management: Not Supported 00:09:42.112 Delete Endurance Group: Not Supported 00:09:42.112 Delete NVM Set: Not Supported 00:09:42.112 Extended LBA Formats Supported: Supported 00:09:42.112 Flexible Data Placement Supported: Not Supported 00:09:42.112 00:09:42.112 
Controller Memory Buffer Support 00:09:42.112 ================================ 00:09:42.112 Supported: No 00:09:42.112 00:09:42.112 Persistent Memory Region Support 00:09:42.112 ================================ 00:09:42.112 Supported: No 00:09:42.112 00:09:42.112 Admin Command Set Attributes 00:09:42.112 ============================ 00:09:42.112 Security Send/Receive: Not Supported 00:09:42.112 Format NVM: Supported 00:09:42.112 Firmware Activate/Download: Not Supported 00:09:42.112 Namespace Management: Supported 00:09:42.112 Device Self-Test: Not Supported 00:09:42.112 Directives: Supported 00:09:42.112 NVMe-MI: Not Supported 00:09:42.112 Virtualization Management: Not Supported 00:09:42.112 Doorbell Buffer Config: Supported 00:09:42.112 Get LBA Status Capability: Not Supported 00:09:42.112 Command & Feature Lockdown Capability: Not Supported 00:09:42.112 Abort Command Limit: 4 00:09:42.112 Async Event Request Limit: 4 00:09:42.112 Number of Firmware Slots: N/A 00:09:42.112 Firmware Slot 1 Read-Only: N/A 00:09:42.112 Firmware Activation Without Reset: N/A 00:09:42.112 Multiple Update Detection Support: N/A 00:09:42.112 Firmware Update Granularity: No Information Provided 00:09:42.112 Per-Namespace SMART Log: Yes 00:09:42.112 Asymmetric Namespace Access Log Page: Not Supported 00:09:42.112 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:42.112 Command Effects Log Page: Supported 00:09:42.112 Get Log Page Extended Data: Supported 00:09:42.112 Telemetry Log Pages: Not Supported 00:09:42.112 Persistent Event Log Pages: Not Supported 00:09:42.112 Supported Log Pages Log Page: May Support 00:09:42.112 Commands Supported & Effects Log Page: Not Supported 00:09:42.112 Feature Identifiers & Effects Log Page:May Support 00:09:42.112 NVMe-MI Commands & Effects Log Page: May Support 00:09:42.112 Data Area 4 for Telemetry Log: Not Supported 00:09:42.112 Error Log Page Entries Supported: 1 00:09:42.112 Keep Alive: Not Supported 00:09:42.112 00:09:42.112 NVM Command Set Attributes 00:09:42.112 ========================== 00:09:42.112 Submission Queue Entry Size 00:09:42.112 Max: 64 00:09:42.112 Min: 64 00:09:42.112 Completion Queue Entry Size 00:09:42.112 Max: 16 00:09:42.112 Min: 16 00:09:42.112 Number of Namespaces: 256 00:09:42.112 Compare Command: Supported 00:09:42.112 Write Uncorrectable Command: Not Supported 00:09:42.112 Dataset Management Command: Supported 00:09:42.112 Write Zeroes Command: Supported 00:09:42.112 Set Features Save Field: Supported 00:09:42.112 Reservations: Not Supported 00:09:42.112 Timestamp: Supported 00:09:42.112 Copy: Supported 00:09:42.112 Volatile Write Cache: Present 00:09:42.112 Atomic Write Unit (Normal): 1 00:09:42.112 Atomic Write Unit (PFail): 1 00:09:42.112 Atomic Compare & Write Unit: 1 00:09:42.112 Fused Compare & Write: Not Supported 00:09:42.112 Scatter-Gather List 00:09:42.112 SGL Command Set: Supported 00:09:42.112 SGL Keyed: Not Supported 00:09:42.112 SGL Bit Bucket Descriptor: Not Supported 00:09:42.112 SGL Metadata Pointer: Not Supported 00:09:42.112 Oversized SGL: Not Supported 00:09:42.112 SGL Metadata Address: Not Supported 00:09:42.112 SGL Offset: Not Supported 00:09:42.112 Transport SGL Data Block: Not Supported 00:09:42.112 Replay Protected Memory Block: Not Supported 00:09:42.112 00:09:42.112 Firmware Slot Information 00:09:42.112 ========================= 00:09:42.112 Active slot: 1 00:09:42.112 Slot 1 Firmware Revision: 1.0 00:09:42.112 00:09:42.112 00:09:42.112 Commands Supported and Effects 00:09:42.112 ============================== 
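
Each dump also carries an LBA-format table plus a "Current LBA Format" pointer into it, which together give the block size I/O against that namespace will use. A hedged sketch of pulling that out of one identify run, assuming the output line shapes match what is printed in this log:

    # Report the format number the 12341 controller currently uses and the
    # matching table row; expect "LBA Format #04: Data Size: 4096 ..." here.
    # (12341 has a single namespace; a multi-namespace dump such as 12342's
    # would print several "Current LBA Format" lines and need a loop instead.)
    dump="$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
              -r 'trtype:PCIe traddr:0000:00:07.0' -i 0)"
    cur="$(printf '%s\n' "$dump" | sed -n 's/.*Current LBA Format: *LBA Format #//p')"
    printf '%s\n' "$dump" | grep "LBA Format #${cur}:"
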
00:09:42.112 Admin Commands 00:09:42.112 -------------- 00:09:42.112 Delete I/O Submission Queue (00h): Supported 00:09:42.112 Create I/O Submission Queue (01h): Supported 00:09:42.112 Get Log Page (02h): Supported 00:09:42.112 Delete I/O Completion Queue (04h): Supported 00:09:42.112 Create I/O Completion Queue (05h): Supported 00:09:42.112 Identify (06h): Supported 00:09:42.112 Abort (08h): Supported 00:09:42.112 Set Features (09h): Supported 00:09:42.112 Get Features (0Ah): Supported 00:09:42.113 Asynchronous Event Request (0Ch): Supported 00:09:42.113 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:42.113 Directive Send (19h): Supported 00:09:42.113 Directive Receive (1Ah): Supported 00:09:42.113 Virtualization Management (1Ch): Supported 00:09:42.113 Doorbell Buffer Config (7Ch): Supported 00:09:42.113 Format NVM (80h): Supported LBA-Change 00:09:42.113 I/O Commands 00:09:42.113 ------------ 00:09:42.113 Flush (00h): Supported LBA-Change 00:09:42.113 Write (01h): Supported LBA-Change 00:09:42.113 Read (02h): Supported 00:09:42.113 Compare (05h): Supported 00:09:42.113 Write Zeroes (08h): Supported LBA-Change 00:09:42.113 Dataset Management (09h): Supported LBA-Change 00:09:42.113 Unknown (0Ch): Supported 00:09:42.113 Unknown (12h): Supported 00:09:42.113 Copy (19h): Supported LBA-Change 00:09:42.113 Unknown (1Dh): Supported LBA-Change 00:09:42.113 00:09:42.113 Error Log 00:09:42.113 ========= 00:09:42.113 00:09:42.113 Arbitration 00:09:42.113 =========== 00:09:42.113 Arbitration Burst: no limit 00:09:42.113 00:09:42.113 Power Management 00:09:42.113 ================ 00:09:42.113 Number of Power States: 1 00:09:42.113 Current Power State: Power State #0 00:09:42.113 Power State #0: 00:09:42.113 Max Power: 25.00 W 00:09:42.113 Non-Operational State: Operational 00:09:42.113 Entry Latency: 16 microseconds 00:09:42.113 Exit Latency: 4 microseconds 00:09:42.113 Relative Read Throughput: 0 00:09:42.113 Relative Read Latency: 0 00:09:42.113 Relative Write Throughput: 0 00:09:42.113 Relative Write Latency: 0 00:09:42.113 Idle Power: Not Reported 00:09:42.113 Active Power: Not Reported 00:09:42.113 Non-Operational Permissive Mode: Not Supported 00:09:42.113 00:09:42.113 Health Information 00:09:42.113 ================== 00:09:42.113 Critical Warnings: 00:09:42.113 Available Spare Space: OK 00:09:42.113 Temperature: OK 00:09:42.113 Device Reliability: OK 00:09:42.113 Read Only: No 00:09:42.113 Volatile Memory Backup: OK 00:09:42.113 Current Temperature: 323 Kelvin (50 Celsius) 00:09:42.113 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:42.113 Available Spare: 0% 00:09:42.113 Available Spare Threshold: 0% 00:09:42.113 Life Percentage Used: 0% 00:09:42.113 Data Units Read: 1294 00:09:42.113 Data Units Written: 601 00:09:42.113 Host Read Commands: 65172 00:09:42.113 Host Write Commands: 32160 00:09:42.113 Controller Busy Time: 0 minutes 00:09:42.113 Power Cycles: 0 00:09:42.113 Power On Hours: 0 hours 00:09:42.113 Unsafe Shutdowns: 0 00:09:42.113 Unrecoverable Media Errors: 0 00:09:42.113 Lifetime Error Log Entries: 0 00:09:42.113 Warning Temperature Time: 0 minutes 00:09:42.113 Critical Temperature Time: 0 minutes 00:09:42.113 00:09:42.113 Number of Queues 00:09:42.113 ================ 00:09:42.113 Number of I/O Submission Queues: 64 00:09:42.113 Number of I/O Completion Queues: 64 00:09:42.113 00:09:42.113 ZNS Specific Controller Data 00:09:42.113 ============================ 00:09:42.113 Zone Append Size Limit: 0 00:09:42.113 00:09:42.113 00:09:42.113 Active Namespaces 
00:09:42.113 ================= 00:09:42.113 Namespace ID:1 00:09:42.113 Error Recovery Timeout: Unlimited 00:09:42.113 Command Set Identifier: NVM (00h) 00:09:42.113 Deallocate: Supported 00:09:42.113 Deallocated/Unwritten Error: Supported 00:09:42.113 Deallocated Read Value: All 0x00 00:09:42.113 Deallocate in Write Zeroes: Not Supported 00:09:42.113 Deallocated Guard Field: 0xFFFF 00:09:42.113 Flush: Supported 00:09:42.113 Reservation: Not Supported 00:09:42.113 Namespace Sharing Capabilities: Private 00:09:42.113 Size (in LBAs): 1310720 (5GiB) 00:09:42.113 Capacity (in LBAs): 1310720 (5GiB) 00:09:42.113 Utilization (in LBAs): 1310720 (5GiB) 00:09:42.113 Thin Provisioning: Not Supported 00:09:42.113 Per-NS Atomic Units: No 00:09:42.113 Maximum Single Source Range Length: 128 00:09:42.113 Maximum Copy Length: 128 00:09:42.113 Maximum Source Range Count: 128 00:09:42.113 NGUID/EUI64 Never Reused: No 00:09:42.113 Namespace Write Protected: No 00:09:42.113 Number of LBA Formats: 8 00:09:42.113 Current LBA Format: LBA Format #04 00:09:42.113 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:42.113 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:42.113 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:42.113 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:42.113 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:42.113 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:42.113 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:42.113 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:42.113 00:09:42.113 15:51:53 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:42.113 15:51:53 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' -i 0 00:09:42.372 ===================================================== 00:09:42.372 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:42.372 ===================================================== 00:09:42.372 Controller Capabilities/Features 00:09:42.372 ================================ 00:09:42.372 Vendor ID: 1b36 00:09:42.372 Subsystem Vendor ID: 1af4 00:09:42.372 Serial Number: 12342 00:09:42.372 Model Number: QEMU NVMe Ctrl 00:09:42.372 Firmware Version: 8.0.0 00:09:42.372 Recommended Arb Burst: 6 00:09:42.372 IEEE OUI Identifier: 00 54 52 00:09:42.372 Multi-path I/O 00:09:42.372 May have multiple subsystem ports: No 00:09:42.372 May have multiple controllers: No 00:09:42.372 Associated with SR-IOV VF: No 00:09:42.372 Max Data Transfer Size: 524288 00:09:42.372 Max Number of Namespaces: 256 00:09:42.372 Max Number of I/O Queues: 64 00:09:42.372 NVMe Specification Version (VS): 1.4 00:09:42.372 NVMe Specification Version (Identify): 1.4 00:09:42.372 Maximum Queue Entries: 2048 00:09:42.372 Contiguous Queues Required: Yes 00:09:42.372 Arbitration Mechanisms Supported 00:09:42.372 Weighted Round Robin: Not Supported 00:09:42.372 Vendor Specific: Not Supported 00:09:42.372 Reset Timeout: 7500 ms 00:09:42.372 Doorbell Stride: 4 bytes 00:09:42.372 NVM Subsystem Reset: Not Supported 00:09:42.372 Command Sets Supported 00:09:42.372 NVM Command Set: Supported 00:09:42.372 Boot Partition: Not Supported 00:09:42.372 Memory Page Size Minimum: 4096 bytes 00:09:42.372 Memory Page Size Maximum: 65536 bytes 00:09:42.372 Persistent Memory Region: Not Supported 00:09:42.372 Optional Asynchronous Events Supported 00:09:42.372 Namespace Attribute Notices: Supported 00:09:42.372 Firmware Activation Notices: Not Supported 00:09:42.372 ANA Change Notices: Not Supported 
00:09:42.372 PLE Aggregate Log Change Notices: Not Supported 00:09:42.372 LBA Status Info Alert Notices: Not Supported 00:09:42.372 EGE Aggregate Log Change Notices: Not Supported 00:09:42.372 Normal NVM Subsystem Shutdown event: Not Supported 00:09:42.372 Zone Descriptor Change Notices: Not Supported 00:09:42.372 Discovery Log Change Notices: Not Supported 00:09:42.372 Controller Attributes 00:09:42.372 128-bit Host Identifier: Not Supported 00:09:42.372 Non-Operational Permissive Mode: Not Supported 00:09:42.372 NVM Sets: Not Supported 00:09:42.372 Read Recovery Levels: Not Supported 00:09:42.372 Endurance Groups: Not Supported 00:09:42.372 Predictable Latency Mode: Not Supported 00:09:42.372 Traffic Based Keep ALive: Not Supported 00:09:42.372 Namespace Granularity: Not Supported 00:09:42.372 SQ Associations: Not Supported 00:09:42.372 UUID List: Not Supported 00:09:42.372 Multi-Domain Subsystem: Not Supported 00:09:42.372 Fixed Capacity Management: Not Supported 00:09:42.372 Variable Capacity Management: Not Supported 00:09:42.372 Delete Endurance Group: Not Supported 00:09:42.372 Delete NVM Set: Not Supported 00:09:42.372 Extended LBA Formats Supported: Supported 00:09:42.372 Flexible Data Placement Supported: Not Supported 00:09:42.372 00:09:42.372 Controller Memory Buffer Support 00:09:42.372 ================================ 00:09:42.372 Supported: No 00:09:42.372 00:09:42.372 Persistent Memory Region Support 00:09:42.372 ================================ 00:09:42.372 Supported: No 00:09:42.372 00:09:42.372 Admin Command Set Attributes 00:09:42.372 ============================ 00:09:42.372 Security Send/Receive: Not Supported 00:09:42.372 Format NVM: Supported 00:09:42.372 Firmware Activate/Download: Not Supported 00:09:42.372 Namespace Management: Supported 00:09:42.372 Device Self-Test: Not Supported 00:09:42.372 Directives: Supported 00:09:42.372 NVMe-MI: Not Supported 00:09:42.372 Virtualization Management: Not Supported 00:09:42.373 Doorbell Buffer Config: Supported 00:09:42.373 Get LBA Status Capability: Not Supported 00:09:42.373 Command & Feature Lockdown Capability: Not Supported 00:09:42.373 Abort Command Limit: 4 00:09:42.373 Async Event Request Limit: 4 00:09:42.373 Number of Firmware Slots: N/A 00:09:42.373 Firmware Slot 1 Read-Only: N/A 00:09:42.373 Firmware Activation Without Reset: N/A 00:09:42.373 Multiple Update Detection Support: N/A 00:09:42.373 Firmware Update Granularity: No Information Provided 00:09:42.373 Per-Namespace SMART Log: Yes 00:09:42.373 Asymmetric Namespace Access Log Page: Not Supported 00:09:42.373 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:42.373 Command Effects Log Page: Supported 00:09:42.373 Get Log Page Extended Data: Supported 00:09:42.373 Telemetry Log Pages: Not Supported 00:09:42.373 Persistent Event Log Pages: Not Supported 00:09:42.373 Supported Log Pages Log Page: May Support 00:09:42.373 Commands Supported & Effects Log Page: Not Supported 00:09:42.373 Feature Identifiers & Effects Log Page:May Support 00:09:42.373 NVMe-MI Commands & Effects Log Page: May Support 00:09:42.373 Data Area 4 for Telemetry Log: Not Supported 00:09:42.373 Error Log Page Entries Supported: 1 00:09:42.373 Keep Alive: Not Supported 00:09:42.373 00:09:42.373 NVM Command Set Attributes 00:09:42.373 ========================== 00:09:42.373 Submission Queue Entry Size 00:09:42.373 Max: 64 00:09:42.373 Min: 64 00:09:42.373 Completion Queue Entry Size 00:09:42.373 Max: 16 00:09:42.373 Min: 16 00:09:42.373 Number of Namespaces: 256 00:09:42.373 Compare Command: 
Supported 00:09:42.373 Write Uncorrectable Command: Not Supported 00:09:42.373 Dataset Management Command: Supported 00:09:42.373 Write Zeroes Command: Supported 00:09:42.373 Set Features Save Field: Supported 00:09:42.373 Reservations: Not Supported 00:09:42.373 Timestamp: Supported 00:09:42.373 Copy: Supported 00:09:42.373 Volatile Write Cache: Present 00:09:42.373 Atomic Write Unit (Normal): 1 00:09:42.373 Atomic Write Unit (PFail): 1 00:09:42.373 Atomic Compare & Write Unit: 1 00:09:42.373 Fused Compare & Write: Not Supported 00:09:42.373 Scatter-Gather List 00:09:42.373 SGL Command Set: Supported 00:09:42.373 SGL Keyed: Not Supported 00:09:42.373 SGL Bit Bucket Descriptor: Not Supported 00:09:42.373 SGL Metadata Pointer: Not Supported 00:09:42.373 Oversized SGL: Not Supported 00:09:42.373 SGL Metadata Address: Not Supported 00:09:42.373 SGL Offset: Not Supported 00:09:42.373 Transport SGL Data Block: Not Supported 00:09:42.373 Replay Protected Memory Block: Not Supported 00:09:42.373 00:09:42.373 Firmware Slot Information 00:09:42.373 ========================= 00:09:42.373 Active slot: 1 00:09:42.373 Slot 1 Firmware Revision: 1.0 00:09:42.373 00:09:42.373 00:09:42.373 Commands Supported and Effects 00:09:42.373 ============================== 00:09:42.373 Admin Commands 00:09:42.373 -------------- 00:09:42.373 Delete I/O Submission Queue (00h): Supported 00:09:42.373 Create I/O Submission Queue (01h): Supported 00:09:42.373 Get Log Page (02h): Supported 00:09:42.373 Delete I/O Completion Queue (04h): Supported 00:09:42.373 Create I/O Completion Queue (05h): Supported 00:09:42.373 Identify (06h): Supported 00:09:42.373 Abort (08h): Supported 00:09:42.373 Set Features (09h): Supported 00:09:42.373 Get Features (0Ah): Supported 00:09:42.373 Asynchronous Event Request (0Ch): Supported 00:09:42.373 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:42.373 Directive Send (19h): Supported 00:09:42.373 Directive Receive (1Ah): Supported 00:09:42.373 Virtualization Management (1Ch): Supported 00:09:42.373 Doorbell Buffer Config (7Ch): Supported 00:09:42.373 Format NVM (80h): Supported LBA-Change 00:09:42.373 I/O Commands 00:09:42.373 ------------ 00:09:42.373 Flush (00h): Supported LBA-Change 00:09:42.373 Write (01h): Supported LBA-Change 00:09:42.373 Read (02h): Supported 00:09:42.373 Compare (05h): Supported 00:09:42.373 Write Zeroes (08h): Supported LBA-Change 00:09:42.373 Dataset Management (09h): Supported LBA-Change 00:09:42.373 Unknown (0Ch): Supported 00:09:42.373 Unknown (12h): Supported 00:09:42.373 Copy (19h): Supported LBA-Change 00:09:42.373 Unknown (1Dh): Supported LBA-Change 00:09:42.373 00:09:42.373 Error Log 00:09:42.373 ========= 00:09:42.373 00:09:42.373 Arbitration 00:09:42.373 =========== 00:09:42.373 Arbitration Burst: no limit 00:09:42.373 00:09:42.373 Power Management 00:09:42.373 ================ 00:09:42.373 Number of Power States: 1 00:09:42.373 Current Power State: Power State #0 00:09:42.373 Power State #0: 00:09:42.373 Max Power: 25.00 W 00:09:42.373 Non-Operational State: Operational 00:09:42.373 Entry Latency: 16 microseconds 00:09:42.373 Exit Latency: 4 microseconds 00:09:42.373 Relative Read Throughput: 0 00:09:42.373 Relative Read Latency: 0 00:09:42.373 Relative Write Throughput: 0 00:09:42.373 Relative Write Latency: 0 00:09:42.373 Idle Power: Not Reported 00:09:42.373 Active Power: Not Reported 00:09:42.373 Non-Operational Permissive Mode: Not Supported 00:09:42.373 00:09:42.373 Health Information 00:09:42.373 ================== 00:09:42.373 
Critical Warnings: 00:09:42.373 Available Spare Space: OK 00:09:42.373 Temperature: OK 00:09:42.373 Device Reliability: OK 00:09:42.373 Read Only: No 00:09:42.373 Volatile Memory Backup: OK 00:09:42.373 Current Temperature: 323 Kelvin (50 Celsius) 00:09:42.373 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:42.373 Available Spare: 0% 00:09:42.373 Available Spare Threshold: 0% 00:09:42.373 Life Percentage Used: 0% 00:09:42.373 Data Units Read: 4051 00:09:42.373 Data Units Written: 1873 00:09:42.373 Host Read Commands: 197498 00:09:42.373 Host Write Commands: 97234 00:09:42.373 Controller Busy Time: 0 minutes 00:09:42.373 Power Cycles: 0 00:09:42.373 Power On Hours: 0 hours 00:09:42.373 Unsafe Shutdowns: 0 00:09:42.373 Unrecoverable Media Errors: 0 00:09:42.373 Lifetime Error Log Entries: 0 00:09:42.373 Warning Temperature Time: 0 minutes 00:09:42.373 Critical Temperature Time: 0 minutes 00:09:42.373 00:09:42.373 Number of Queues 00:09:42.373 ================ 00:09:42.373 Number of I/O Submission Queues: 64 00:09:42.373 Number of I/O Completion Queues: 64 00:09:42.373 00:09:42.373 ZNS Specific Controller Data 00:09:42.373 ============================ 00:09:42.373 Zone Append Size Limit: 0 00:09:42.373 00:09:42.373 00:09:42.373 Active Namespaces 00:09:42.373 ================= 00:09:42.373 Namespace ID:1 00:09:42.373 Error Recovery Timeout: Unlimited 00:09:42.373 Command Set Identifier: NVM (00h) 00:09:42.373 Deallocate: Supported 00:09:42.373 Deallocated/Unwritten Error: Supported 00:09:42.373 Deallocated Read Value: All 0x00 00:09:42.373 Deallocate in Write Zeroes: Not Supported 00:09:42.373 Deallocated Guard Field: 0xFFFF 00:09:42.373 Flush: Supported 00:09:42.373 Reservation: Not Supported 00:09:42.373 Namespace Sharing Capabilities: Private 00:09:42.373 Size (in LBAs): 1048576 (4GiB) 00:09:42.373 Capacity (in LBAs): 1048576 (4GiB) 00:09:42.373 Utilization (in LBAs): 1048576 (4GiB) 00:09:42.373 Thin Provisioning: Not Supported 00:09:42.373 Per-NS Atomic Units: No 00:09:42.373 Maximum Single Source Range Length: 128 00:09:42.373 Maximum Copy Length: 128 00:09:42.373 Maximum Source Range Count: 128 00:09:42.373 NGUID/EUI64 Never Reused: No 00:09:42.373 Namespace Write Protected: No 00:09:42.373 Number of LBA Formats: 8 00:09:42.373 Current LBA Format: LBA Format #04 00:09:42.373 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:42.373 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:42.373 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:42.373 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:42.373 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:42.373 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:42.373 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:42.373 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:42.373 00:09:42.373 Namespace ID:2 00:09:42.373 Error Recovery Timeout: Unlimited 00:09:42.373 Command Set Identifier: NVM (00h) 00:09:42.373 Deallocate: Supported 00:09:42.373 Deallocated/Unwritten Error: Supported 00:09:42.373 Deallocated Read Value: All 0x00 00:09:42.373 Deallocate in Write Zeroes: Not Supported 00:09:42.373 Deallocated Guard Field: 0xFFFF 00:09:42.373 Flush: Supported 00:09:42.373 Reservation: Not Supported 00:09:42.373 Namespace Sharing Capabilities: Private 00:09:42.373 Size (in LBAs): 1048576 (4GiB) 00:09:42.373 Capacity (in LBAs): 1048576 (4GiB) 00:09:42.373 Utilization (in LBAs): 1048576 (4GiB) 00:09:42.373 Thin Provisioning: Not Supported 00:09:42.373 Per-NS Atomic Units: No 00:09:42.373 Maximum Single 
Source Range Length: 128 00:09:42.373 Maximum Copy Length: 128 00:09:42.373 Maximum Source Range Count: 128 00:09:42.373 NGUID/EUI64 Never Reused: No 00:09:42.373 Namespace Write Protected: No 00:09:42.373 Number of LBA Formats: 8 00:09:42.373 Current LBA Format: LBA Format #04 00:09:42.373 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:42.373 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:42.373 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:42.373 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:42.373 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:42.373 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:42.373 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:42.373 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:42.373 00:09:42.373 Namespace ID:3 00:09:42.373 Error Recovery Timeout: Unlimited 00:09:42.373 Command Set Identifier: NVM (00h) 00:09:42.373 Deallocate: Supported 00:09:42.373 Deallocated/Unwritten Error: Supported 00:09:42.373 Deallocated Read Value: All 0x00 00:09:42.373 Deallocate in Write Zeroes: Not Supported 00:09:42.373 Deallocated Guard Field: 0xFFFF 00:09:42.373 Flush: Supported 00:09:42.373 Reservation: Not Supported 00:09:42.373 Namespace Sharing Capabilities: Private 00:09:42.373 Size (in LBAs): 1048576 (4GiB) 00:09:42.373 Capacity (in LBAs): 1048576 (4GiB) 00:09:42.373 Utilization (in LBAs): 1048576 (4GiB) 00:09:42.373 Thin Provisioning: Not Supported 00:09:42.373 Per-NS Atomic Units: No 00:09:42.373 Maximum Single Source Range Length: 128 00:09:42.373 Maximum Copy Length: 128 00:09:42.373 Maximum Source Range Count: 128 00:09:42.373 NGUID/EUI64 Never Reused: No 00:09:42.373 Namespace Write Protected: No 00:09:42.373 Number of LBA Formats: 8 00:09:42.373 Current LBA Format: LBA Format #04 00:09:42.373 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:42.373 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:42.373 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:42.373 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:42.373 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:42.373 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:42.373 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:42.373 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:42.373 00:09:42.373 15:51:53 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:42.373 15:51:53 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' -i 0 00:09:42.633 ===================================================== 00:09:42.633 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:42.633 ===================================================== 00:09:42.633 Controller Capabilities/Features 00:09:42.633 ================================ 00:09:42.633 Vendor ID: 1b36 00:09:42.633 Subsystem Vendor ID: 1af4 00:09:42.633 Serial Number: 12343 00:09:42.633 Model Number: QEMU NVMe Ctrl 00:09:42.633 Firmware Version: 8.0.0 00:09:42.633 Recommended Arb Burst: 6 00:09:42.633 IEEE OUI Identifier: 00 54 52 00:09:42.633 Multi-path I/O 00:09:42.633 May have multiple subsystem ports: No 00:09:42.633 May have multiple controllers: Yes 00:09:42.633 Associated with SR-IOV VF: No 00:09:42.633 Max Data Transfer Size: 524288 00:09:42.633 Max Number of Namespaces: 256 00:09:42.633 Max Number of I/O Queues: 64 00:09:42.633 NVMe Specification Version (VS): 1.4 00:09:42.633 NVMe Specification Version (Identify): 1.4 00:09:42.633 Maximum Queue Entries: 2048 00:09:42.633 Contiguous Queues 
Required: Yes 00:09:42.633 Arbitration Mechanisms Supported 00:09:42.633 Weighted Round Robin: Not Supported 00:09:42.633 Vendor Specific: Not Supported 00:09:42.633 Reset Timeout: 7500 ms 00:09:42.633 Doorbell Stride: 4 bytes 00:09:42.633 NVM Subsystem Reset: Not Supported 00:09:42.633 Command Sets Supported 00:09:42.633 NVM Command Set: Supported 00:09:42.633 Boot Partition: Not Supported 00:09:42.633 Memory Page Size Minimum: 4096 bytes 00:09:42.633 Memory Page Size Maximum: 65536 bytes 00:09:42.633 Persistent Memory Region: Not Supported 00:09:42.633 Optional Asynchronous Events Supported 00:09:42.633 Namespace Attribute Notices: Supported 00:09:42.633 Firmware Activation Notices: Not Supported 00:09:42.633 ANA Change Notices: Not Supported 00:09:42.633 PLE Aggregate Log Change Notices: Not Supported 00:09:42.633 LBA Status Info Alert Notices: Not Supported 00:09:42.633 EGE Aggregate Log Change Notices: Not Supported 00:09:42.633 Normal NVM Subsystem Shutdown event: Not Supported 00:09:42.633 Zone Descriptor Change Notices: Not Supported 00:09:42.633 Discovery Log Change Notices: Not Supported 00:09:42.633 Controller Attributes 00:09:42.633 128-bit Host Identifier: Not Supported 00:09:42.633 Non-Operational Permissive Mode: Not Supported 00:09:42.633 NVM Sets: Not Supported 00:09:42.633 Read Recovery Levels: Not Supported 00:09:42.633 Endurance Groups: Supported 00:09:42.633 Predictable Latency Mode: Not Supported 00:09:42.633 Traffic Based Keep Alive: Not Supported 00:09:42.633 Namespace Granularity: Not Supported 00:09:42.633 SQ Associations: Not Supported 00:09:42.633 UUID List: Not Supported 00:09:42.633 Multi-Domain Subsystem: Not Supported 00:09:42.633 Fixed Capacity Management: Not Supported 00:09:42.633 Variable Capacity Management: Not Supported 00:09:42.633 Delete Endurance Group: Not Supported 00:09:42.633 Delete NVM Set: Not Supported 00:09:42.633 Extended LBA Formats Supported: Supported 00:09:42.633 Flexible Data Placement Supported: Supported 00:09:42.633 00:09:42.633 Controller Memory Buffer Support 00:09:42.633 ================================ 00:09:42.633 Supported: No 00:09:42.633 00:09:42.633 Persistent Memory Region Support 00:09:42.633 ================================ 00:09:42.633 Supported: No 00:09:42.633 00:09:42.633 Admin Command Set Attributes 00:09:42.633 ============================ 00:09:42.633 Security Send/Receive: Not Supported 00:09:42.633 Format NVM: Supported 00:09:42.633 Firmware Activate/Download: Not Supported 00:09:42.633 Namespace Management: Supported 00:09:42.633 Device Self-Test: Not Supported 00:09:42.633 Directives: Supported 00:09:42.633 NVMe-MI: Not Supported 00:09:42.633 Virtualization Management: Not Supported 00:09:42.633 Doorbell Buffer Config: Supported 00:09:42.633 Get LBA Status Capability: Not Supported 00:09:42.633 Command & Feature Lockdown Capability: Not Supported 00:09:42.633 Abort Command Limit: 4 00:09:42.633 Async Event Request Limit: 4 00:09:42.633 Number of Firmware Slots: N/A 00:09:42.633 Firmware Slot 1 Read-Only: N/A 00:09:42.633 Firmware Activation Without Reset: N/A 00:09:42.633 Multiple Update Detection Support: N/A 00:09:42.633 Firmware Update Granularity: No Information Provided 00:09:42.633 Per-Namespace SMART Log: Yes 00:09:42.633 Asymmetric Namespace Access Log Page: Not Supported 00:09:42.633 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:42.633 Command Effects Log Page: Supported 00:09:42.633 Get Log Page Extended Data: Supported 00:09:42.633 Telemetry Log Pages: Not Supported 00:09:42.633 Persistent
Event Log Pages: Not Supported 00:09:42.633 Supported Log Pages Log Page: May Support 00:09:42.633 Commands Supported & Effects Log Page: Not Supported 00:09:42.633 Feature Identifiers & Effects Log Page: May Support 00:09:42.633 NVMe-MI Commands & Effects Log Page: May Support 00:09:42.633 Data Area 4 for Telemetry Log: Not Supported 00:09:42.633 Error Log Page Entries Supported: 1 00:09:42.633 Keep Alive: Not Supported 00:09:42.633 00:09:42.633 NVM Command Set Attributes 00:09:42.633 ========================== 00:09:42.633 Submission Queue Entry Size 00:09:42.633 Max: 64 00:09:42.633 Min: 64 00:09:42.633 Completion Queue Entry Size 00:09:42.633 Max: 16 00:09:42.633 Min: 16 00:09:42.633 Number of Namespaces: 256 00:09:42.633 Compare Command: Supported 00:09:42.633 Write Uncorrectable Command: Not Supported 00:09:42.633 Dataset Management Command: Supported 00:09:42.633 Write Zeroes Command: Supported 00:09:42.633 Set Features Save Field: Supported 00:09:42.633 Reservations: Not Supported 00:09:42.633 Timestamp: Supported 00:09:42.633 Copy: Supported 00:09:42.633 Volatile Write Cache: Present 00:09:42.633 Atomic Write Unit (Normal): 1 00:09:42.633 Atomic Write Unit (PFail): 1 00:09:42.633 Atomic Compare & Write Unit: 1 00:09:42.633 Fused Compare & Write: Not Supported 00:09:42.633 Scatter-Gather List 00:09:42.633 SGL Command Set: Supported 00:09:42.633 SGL Keyed: Not Supported 00:09:42.633 SGL Bit Bucket Descriptor: Not Supported 00:09:42.633 SGL Metadata Pointer: Not Supported 00:09:42.633 Oversized SGL: Not Supported 00:09:42.633 SGL Metadata Address: Not Supported 00:09:42.633 SGL Offset: Not Supported 00:09:42.633 Transport SGL Data Block: Not Supported 00:09:42.633 Replay Protected Memory Block: Not Supported 00:09:42.633 00:09:42.633 Firmware Slot Information 00:09:42.633 ========================= 00:09:42.633 Active slot: 1 00:09:42.633 Slot 1 Firmware Revision: 1.0 00:09:42.633 00:09:42.633 00:09:42.633 Commands Supported and Effects 00:09:42.633 ============================== 00:09:42.633 Admin Commands 00:09:42.633 -------------- 00:09:42.633 Delete I/O Submission Queue (00h): Supported 00:09:42.633 Create I/O Submission Queue (01h): Supported 00:09:42.633 Get Log Page (02h): Supported 00:09:42.633 Delete I/O Completion Queue (04h): Supported 00:09:42.633 Create I/O Completion Queue (05h): Supported 00:09:42.633 Identify (06h): Supported 00:09:42.633 Abort (08h): Supported 00:09:42.633 Set Features (09h): Supported 00:09:42.633 Get Features (0Ah): Supported 00:09:42.633 Asynchronous Event Request (0Ch): Supported 00:09:42.633 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:42.633 Directive Send (19h): Supported 00:09:42.633 Directive Receive (1Ah): Supported 00:09:42.633 Virtualization Management (1Ch): Supported 00:09:42.634 Doorbell Buffer Config (7Ch): Supported 00:09:42.634 Format NVM (80h): Supported LBA-Change 00:09:42.634 I/O Commands 00:09:42.634 ------------ 00:09:42.634 Flush (00h): Supported LBA-Change 00:09:42.634 Write (01h): Supported LBA-Change 00:09:42.634 Read (02h): Supported 00:09:42.634 Compare (05h): Supported 00:09:42.634 Write Zeroes (08h): Supported LBA-Change 00:09:42.634 Dataset Management (09h): Supported LBA-Change 00:09:42.634 Unknown (0Ch): Supported 00:09:42.634 Unknown (12h): Supported 00:09:42.634 Copy (19h): Supported LBA-Change 00:09:42.634 Unknown (1Dh): Supported LBA-Change 00:09:42.634 00:09:42.634 Error Log 00:09:42.634 ========= 00:09:42.634 00:09:42.634 Arbitration 00:09:42.634 =========== 00:09:42.634 Arbitration Burst: no
limit 00:09:42.634 00:09:42.634 Power Management 00:09:42.634 ================ 00:09:42.634 Number of Power States: 1 00:09:42.634 Current Power State: Power State #0 00:09:42.634 Power State #0: 00:09:42.634 Max Power: 25.00 W 00:09:42.634 Non-Operational State: Operational 00:09:42.634 Entry Latency: 16 microseconds 00:09:42.634 Exit Latency: 4 microseconds 00:09:42.634 Relative Read Throughput: 0 00:09:42.634 Relative Read Latency: 0 00:09:42.634 Relative Write Throughput: 0 00:09:42.634 Relative Write Latency: 0 00:09:42.634 Idle Power: Not Reported 00:09:42.634 Active Power: Not Reported 00:09:42.634 Non-Operational Permissive Mode: Not Supported 00:09:42.634 00:09:42.634 Health Information 00:09:42.634 ================== 00:09:42.634 Critical Warnings: 00:09:42.634 Available Spare Space: OK 00:09:42.634 Temperature: OK 00:09:42.634 Device Reliability: OK 00:09:42.634 Read Only: No 00:09:42.634 Volatile Memory Backup: OK 00:09:42.634 Current Temperature: 323 Kelvin (50 Celsius) 00:09:42.634 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:42.634 Available Spare: 0% 00:09:42.634 Available Spare Threshold: 0% 00:09:42.634 Life Percentage Used: 0% 00:09:42.634 Data Units Read: 1441 00:09:42.634 Data Units Written: 672 00:09:42.634 Host Read Commands: 66480 00:09:42.634 Host Write Commands: 32806 00:09:42.634 Controller Busy Time: 0 minutes 00:09:42.634 Power Cycles: 0 00:09:42.634 Power On Hours: 0 hours 00:09:42.634 Unsafe Shutdowns: 0 00:09:42.634 Unrecoverable Media Errors: 0 00:09:42.634 Lifetime Error Log Entries: 0 00:09:42.634 Warning Temperature Time: 0 minutes 00:09:42.634 Critical Temperature Time: 0 minutes 00:09:42.634 00:09:42.634 Number of Queues 00:09:42.634 ================ 00:09:42.634 Number of I/O Submission Queues: 64 00:09:42.634 Number of I/O Completion Queues: 64 00:09:42.634 00:09:42.634 ZNS Specific Controller Data 00:09:42.634 ============================ 00:09:42.634 Zone Append Size Limit: 0 00:09:42.634 00:09:42.634 00:09:42.634 Active Namespaces 00:09:42.634 ================= 00:09:42.634 Namespace ID:1 00:09:42.634 Error Recovery Timeout: Unlimited 00:09:42.634 Command Set Identifier: NVM (00h) 00:09:42.634 Deallocate: Supported 00:09:42.634 Deallocated/Unwritten Error: Supported 00:09:42.634 Deallocated Read Value: All 0x00 00:09:42.634 Deallocate in Write Zeroes: Not Supported 00:09:42.634 Deallocated Guard Field: 0xFFFF 00:09:42.634 Flush: Supported 00:09:42.634 Reservation: Not Supported 00:09:42.634 Namespace Sharing Capabilities: Multiple Controllers 00:09:42.634 Size (in LBAs): 262144 (1GiB) 00:09:42.634 Capacity (in LBAs): 262144 (1GiB) 00:09:42.634 Utilization (in LBAs): 262144 (1GiB) 00:09:42.634 Thin Provisioning: Not Supported 00:09:42.634 Per-NS Atomic Units: No 00:09:42.634 Maximum Single Source Range Length: 128 00:09:42.634 Maximum Copy Length: 128 00:09:42.634 Maximum Source Range Count: 128 00:09:42.634 NGUID/EUI64 Never Reused: No 00:09:42.634 Namespace Write Protected: No 00:09:42.634 Endurance group ID: 1 00:09:42.634 Number of LBA Formats: 8 00:09:42.634 Current LBA Format: LBA Format #04 00:09:42.634 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:42.634 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:42.634 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:42.634 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:42.634 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:42.634 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:42.634 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:42.634 LBA 
Format #07: Data Size: 4096 Metadata Size: 64 00:09:42.634 00:09:42.634 Get Feature FDP: 00:09:42.634 ================ 00:09:42.634 Enabled: Yes 00:09:42.634 FDP configuration index: 0 00:09:42.634 00:09:42.634 FDP configurations log page 00:09:42.634 =========================== 00:09:42.634 Number of FDP configurations: 1 00:09:42.634 Version: 0 00:09:42.634 Size: 112 00:09:42.634 FDP Configuration Descriptor: 0 00:09:42.634 Descriptor Size: 96 00:09:42.634 Reclaim Group Identifier format: 2 00:09:42.634 FDP Volatile Write Cache: Not Present 00:09:42.634 FDP Configuration: Valid 00:09:42.634 Vendor Specific Size: 0 00:09:42.634 Number of Reclaim Groups: 2 00:09:42.634 Number of Reclaim Unit Handles: 8 00:09:42.634 Max Placement Identifiers: 128 00:09:42.634 Number of Namespaces Supported: 256 00:09:42.634 Reclaim Unit Nominal Size: 6000000 bytes 00:09:42.634 Estimated Reclaim Unit Time Limit: Not Reported 00:09:42.634 RUH Desc #000: RUH Type: Initially Isolated 00:09:42.634 RUH Desc #001: RUH Type: Initially Isolated 00:09:42.634 RUH Desc #002: RUH Type: Initially Isolated 00:09:42.634 RUH Desc #003: RUH Type: Initially Isolated 00:09:42.634 RUH Desc #004: RUH Type: Initially Isolated 00:09:42.634 RUH Desc #005: RUH Type: Initially Isolated 00:09:42.634 RUH Desc #006: RUH Type: Initially Isolated 00:09:42.634 RUH Desc #007: RUH Type: Initially Isolated 00:09:42.634 00:09:42.634 FDP reclaim unit handle usage log page 00:09:42.634 ====================================== 00:09:42.634 Number of Reclaim Unit Handles: 8 00:09:42.634 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:42.634 RUH Usage Desc #001: RUH Attributes: Unused 00:09:42.634 RUH Usage Desc #002: RUH Attributes: Unused 00:09:42.634 RUH Usage Desc #003: RUH Attributes: Unused 00:09:42.634 RUH Usage Desc #004: RUH Attributes: Unused 00:09:42.634 RUH Usage Desc #005: RUH Attributes: Unused 00:09:42.634 RUH Usage Desc #006: RUH Attributes: Unused 00:09:42.634 RUH Usage Desc #007: RUH Attributes: Unused 00:09:42.634 00:09:42.634 FDP statistics log page 00:09:42.634 ======================= 00:09:42.634 Host bytes with metadata written: 449896448 00:09:42.634 Media bytes with metadata written: 449966080 00:09:42.634 Media bytes erased: 0 00:09:42.634 00:09:42.634 FDP events log page 00:09:42.634 =================== 00:09:42.634 Number of FDP events: 0 00:09:42.634 00:09:42.634 00:09:42.634 real 0m1.137s 00:09:42.634 user 0m0.377s 00:09:42.634 sys 0m0.531s 00:09:42.634 15:51:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:42.634 15:51:53 -- common/autotest_common.sh@10 -- # set +x 00:09:42.634 ************************************ 00:09:42.634 END TEST nvme_identify 00:09:42.634 ************************************ 00:09:42.634 15:51:53 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:42.634 15:51:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:42.634 15:51:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:42.634 15:51:53 -- common/autotest_common.sh@10 -- # set +x 00:09:42.634 ************************************ 00:09:42.634 START TEST nvme_perf 00:09:42.634 ************************************ 00:09:42.634 15:51:53 -- common/autotest_common.sh@1114 -- # nvme_perf 00:09:42.634 15:51:53 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:44.008 Initializing NVMe Controllers 00:09:44.008 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:44.008 Attached to NVMe Controller at
0000:00:06.0 [1b36:0010] 00:09:44.008 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:44.008 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:44.008 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:44.008 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:44.008 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:44.008 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:44.008 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:44.008 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:44.008 Initialization complete. Launching workers. 00:09:44.008 ======================================================== 00:09:44.008 Latency(us) 00:09:44.008 Device Information : IOPS MiB/s Average min max 00:09:44.008 PCIE (0000:00:09.0) NSID 1 from core 0: 20260.56 237.43 6315.49 4977.58 27830.04 00:09:44.008 PCIE (0000:00:06.0) NSID 1 from core 0: 20260.56 237.43 6309.94 4820.26 27001.30 00:09:44.008 PCIE (0000:00:07.0) NSID 1 from core 0: 20260.56 237.43 6305.35 5002.78 25570.11 00:09:44.008 PCIE (0000:00:08.0) NSID 1 from core 0: 20260.56 237.43 6300.02 4993.10 24998.56 00:09:44.008 PCIE (0000:00:08.0) NSID 2 from core 0: 20260.56 237.43 6294.81 4989.57 23664.52 00:09:44.008 PCIE (0000:00:08.0) NSID 3 from core 0: 20387.99 238.92 6250.41 4985.64 15807.47 00:09:44.008 ======================================================== 00:09:44.008 Total : 121690.81 1426.06 6295.96 4820.26 27830.04 00:09:44.008 00:09:44.008 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:44.008 ================================================================================= 00:09:44.008 1.00000% : 5142.055us 00:09:44.008 10.00000% : 5394.117us 00:09:44.008 25.00000% : 5671.385us 00:09:44.008 50.00000% : 6125.095us 00:09:44.008 75.00000% : 6553.600us 00:09:44.008 90.00000% : 6856.074us 00:09:44.008 95.00000% : 7208.960us 00:09:44.008 98.00000% : 9679.163us 00:09:44.008 99.00000% : 11040.295us 00:09:44.008 99.50000% : 25609.452us 00:09:44.008 99.90000% : 27424.295us 00:09:44.008 99.99000% : 27827.594us 00:09:44.008 99.99900% : 28029.243us 00:09:44.008 99.99990% : 28029.243us 00:09:44.008 99.99999% : 28029.243us 00:09:44.008 00:09:44.008 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:44.008 ================================================================================= 00:09:44.008 1.00000% : 4990.818us 00:09:44.008 10.00000% : 5268.086us 00:09:44.008 25.00000% : 5595.766us 00:09:44.008 50.00000% : 6125.095us 00:09:44.008 75.00000% : 6654.425us 00:09:44.008 90.00000% : 7007.311us 00:09:44.008 95.00000% : 7309.785us 00:09:44.008 98.00000% : 9477.514us 00:09:44.008 99.00000% : 11342.769us 00:09:44.008 99.50000% : 24399.557us 00:09:44.008 99.90000% : 26617.698us 00:09:44.008 99.99000% : 27020.997us 00:09:44.008 99.99900% : 27020.997us 00:09:44.008 99.99990% : 27020.997us 00:09:44.008 99.99999% : 27020.997us 00:09:44.008 00:09:44.009 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:44.009 ================================================================================= 00:09:44.009 1.00000% : 5142.055us 00:09:44.009 10.00000% : 5394.117us 00:09:44.009 25.00000% : 5671.385us 00:09:44.009 50.00000% : 6125.095us 00:09:44.009 75.00000% : 6553.600us 00:09:44.009 90.00000% : 6856.074us 00:09:44.009 95.00000% : 7208.960us 00:09:44.009 98.00000% : 9023.803us 00:09:44.009 99.00000% : 13107.200us 00:09:44.009 99.50000% : 23189.662us 00:09:44.009 99.90000% : 25105.329us 00:09:44.009 99.99000% : 25609.452us 
00:09:44.009 99.99900% : 25609.452us 00:09:44.009 99.99990% : 25609.452us 00:09:44.009 99.99999% : 25609.452us 00:09:44.009 00:09:44.009 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:44.009 ================================================================================= 00:09:44.009 1.00000% : 5142.055us 00:09:44.009 10.00000% : 5394.117us 00:09:44.009 25.00000% : 5671.385us 00:09:44.009 50.00000% : 6125.095us 00:09:44.009 75.00000% : 6604.012us 00:09:44.009 90.00000% : 6856.074us 00:09:44.009 95.00000% : 7259.372us 00:09:44.009 98.00000% : 8570.092us 00:09:44.009 99.00000% : 13812.972us 00:09:44.009 99.50000% : 22584.714us 00:09:44.009 99.90000% : 24601.206us 00:09:44.009 99.99000% : 25004.505us 00:09:44.009 99.99900% : 25004.505us 00:09:44.009 99.99990% : 25004.505us 00:09:44.009 99.99999% : 25004.505us 00:09:44.009 00:09:44.009 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:44.009 ================================================================================= 00:09:44.009 1.00000% : 5167.262us 00:09:44.009 10.00000% : 5394.117us 00:09:44.009 25.00000% : 5671.385us 00:09:44.009 50.00000% : 6125.095us 00:09:44.009 75.00000% : 6553.600us 00:09:44.009 90.00000% : 6856.074us 00:09:44.009 95.00000% : 7259.372us 00:09:44.009 98.00000% : 9124.628us 00:09:44.009 99.00000% : 12552.665us 00:09:44.009 99.50000% : 21273.994us 00:09:44.009 99.90000% : 23189.662us 00:09:44.009 99.99000% : 23693.785us 00:09:44.009 99.99900% : 23693.785us 00:09:44.009 99.99990% : 23693.785us 00:09:44.009 99.99999% : 23693.785us 00:09:44.009 00:09:44.009 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:44.009 ================================================================================= 00:09:44.009 1.00000% : 5167.262us 00:09:44.009 10.00000% : 5394.117us 00:09:44.009 25.00000% : 5671.385us 00:09:44.009 50.00000% : 6125.095us 00:09:44.009 75.00000% : 6604.012us 00:09:44.009 90.00000% : 6856.074us 00:09:44.009 95.00000% : 7360.197us 00:09:44.009 98.00000% : 9679.163us 00:09:44.009 99.00000% : 11393.182us 00:09:44.009 99.50000% : 13409.674us 00:09:44.009 99.90000% : 15426.166us 00:09:44.009 99.99000% : 15829.465us 00:09:44.009 99.99900% : 15829.465us 00:09:44.009 99.99990% : 15829.465us 00:09:44.009 99.99999% : 15829.465us 00:09:44.009 00:09:44.009 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:44.009 ============================================================================== 00:09:44.009 Range in us Cumulative IO count 00:09:44.009 4965.612 - 4990.818: 0.0098% ( 2) 00:09:44.009 4990.818 - 5016.025: 0.0344% ( 5) 00:09:44.009 5016.025 - 5041.231: 0.1720% ( 28) 00:09:44.009 5041.231 - 5066.437: 0.3243% ( 31) 00:09:44.009 5066.437 - 5091.643: 0.5061% ( 37) 00:09:44.009 5091.643 - 5116.849: 0.7862% ( 57) 00:09:44.009 5116.849 - 5142.055: 1.1006% ( 64) 00:09:44.009 5142.055 - 5167.262: 1.4249% ( 66) 00:09:44.009 5167.262 - 5192.468: 1.8377% ( 84) 00:09:44.009 5192.468 - 5217.674: 2.5256% ( 140) 00:09:44.009 5217.674 - 5242.880: 3.4149% ( 181) 00:09:44.009 5242.880 - 5268.086: 4.3829% ( 197) 00:09:44.009 5268.086 - 5293.292: 5.4982% ( 227) 00:09:44.009 5293.292 - 5318.498: 6.7610% ( 257) 00:09:44.009 5318.498 - 5343.705: 8.1024% ( 273) 00:09:44.009 5343.705 - 5368.911: 9.3750% ( 259) 00:09:44.009 5368.911 - 5394.117: 10.6673% ( 263) 00:09:44.009 5394.117 - 5419.323: 12.0824% ( 288) 00:09:44.009 5419.323 - 5444.529: 13.4237% ( 273) 00:09:44.009 5444.529 - 5469.735: 14.7209% ( 264) 00:09:44.009 5469.735 - 5494.942: 16.1065% ( 282) 
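The spdk_nvme_perf invocation recorded at the start of this test can be replayed by hand when iterating on a regression. A minimal sketch, assuming the same repo checkout path as this job and run as root with no other SPDK process holding the devices; flag meanings follow spdk_nvme_perf's own usage text, and -i 0 / -N are kept exactly as logged rather than explained:

  # -q 128   : 128 outstanding I/Os per queue
  # -w read  : 100% sequential reads
  # -o 12288 : 12 KiB I/O size
  # -t 1     : run for 1 second
  # -L twice : software latency tracking; -L prints the summary, -LL also the detailed histograms
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N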
00:09:44.009 5494.942 - 5520.148: 17.5118% ( 286) 00:09:44.009 5520.148 - 5545.354: 18.9121% ( 285) 00:09:44.009 5545.354 - 5570.560: 20.2732% ( 277) 00:09:44.009 5570.560 - 5595.766: 21.6293% ( 276) 00:09:44.009 5595.766 - 5620.972: 23.0002% ( 279) 00:09:44.009 5620.972 - 5646.178: 24.3612% ( 277) 00:09:44.009 5646.178 - 5671.385: 25.7812% ( 289) 00:09:44.009 5671.385 - 5696.591: 27.1816% ( 285) 00:09:44.009 5696.591 - 5721.797: 28.5770% ( 284) 00:09:44.009 5721.797 - 5747.003: 30.0118% ( 292) 00:09:44.009 5747.003 - 5772.209: 31.3876% ( 280) 00:09:44.009 5772.209 - 5797.415: 32.8567% ( 299) 00:09:44.009 5797.415 - 5822.622: 34.2571% ( 285) 00:09:44.009 5822.622 - 5847.828: 35.6771% ( 289) 00:09:44.009 5847.828 - 5873.034: 37.0971% ( 289) 00:09:44.009 5873.034 - 5898.240: 38.4925% ( 284) 00:09:44.009 5898.240 - 5923.446: 39.9125% ( 289) 00:09:44.009 5923.446 - 5948.652: 41.3227% ( 287) 00:09:44.009 5948.652 - 5973.858: 42.7329% ( 287) 00:09:44.009 5973.858 - 5999.065: 44.1136% ( 281) 00:09:44.009 5999.065 - 6024.271: 45.5385% ( 290) 00:09:44.009 6024.271 - 6049.477: 46.9290% ( 283) 00:09:44.009 6049.477 - 6074.683: 48.3491% ( 289) 00:09:44.009 6074.683 - 6099.889: 49.7642% ( 288) 00:09:44.009 6099.889 - 6125.095: 51.1891% ( 290) 00:09:44.009 6125.095 - 6150.302: 52.5599% ( 279) 00:09:44.009 6150.302 - 6175.508: 53.9554% ( 284) 00:09:44.009 6175.508 - 6200.714: 55.3607% ( 286) 00:09:44.009 6200.714 - 6225.920: 56.7807% ( 289) 00:09:44.009 6225.920 - 6251.126: 58.2105% ( 291) 00:09:44.009 6251.126 - 6276.332: 59.5814% ( 279) 00:09:44.009 6276.332 - 6301.538: 60.9915% ( 287) 00:09:44.009 6301.538 - 6326.745: 62.4066% ( 288) 00:09:44.009 6326.745 - 6351.951: 63.8217% ( 288) 00:09:44.009 6351.951 - 6377.157: 65.2614% ( 293) 00:09:44.009 6377.157 - 6402.363: 66.6814% ( 289) 00:09:44.009 6402.363 - 6427.569: 68.1014% ( 289) 00:09:44.009 6427.569 - 6452.775: 69.5214% ( 289) 00:09:44.009 6452.775 - 6503.188: 72.3320% ( 572) 00:09:44.009 6503.188 - 6553.600: 75.2211% ( 588) 00:09:44.009 6553.600 - 6604.012: 78.1103% ( 588) 00:09:44.009 6604.012 - 6654.425: 80.9454% ( 577) 00:09:44.009 6654.425 - 6704.837: 83.6675% ( 554) 00:09:44.009 6704.837 - 6755.249: 86.1439% ( 504) 00:09:44.009 6755.249 - 6805.662: 88.4876% ( 477) 00:09:44.009 6805.662 - 6856.074: 90.6053% ( 431) 00:09:44.009 6856.074 - 6906.486: 92.3300% ( 351) 00:09:44.009 6906.486 - 6956.898: 93.4699% ( 232) 00:09:44.009 6956.898 - 7007.311: 94.0645% ( 121) 00:09:44.009 7007.311 - 7057.723: 94.3937% ( 67) 00:09:44.009 7057.723 - 7108.135: 94.6639% ( 55) 00:09:44.009 7108.135 - 7158.548: 94.8949% ( 47) 00:09:44.009 7158.548 - 7208.960: 95.0914% ( 40) 00:09:44.009 7208.960 - 7259.372: 95.2535% ( 33) 00:09:44.009 7259.372 - 7309.785: 95.3715% ( 24) 00:09:44.009 7309.785 - 7360.197: 95.4992% ( 26) 00:09:44.009 7360.197 - 7410.609: 95.6073% ( 22) 00:09:44.009 7410.609 - 7461.022: 95.6810% ( 15) 00:09:44.009 7461.022 - 7511.434: 95.7449% ( 13) 00:09:44.009 7511.434 - 7561.846: 95.8088% ( 13) 00:09:44.009 7561.846 - 7612.258: 95.8726% ( 13) 00:09:44.009 7612.258 - 7662.671: 95.9414% ( 14) 00:09:44.009 7662.671 - 7713.083: 96.0053% ( 13) 00:09:44.009 7713.083 - 7763.495: 96.0643% ( 12) 00:09:44.009 7763.495 - 7813.908: 96.1134% ( 10) 00:09:44.009 7813.908 - 7864.320: 96.1625% ( 10) 00:09:44.009 7864.320 - 7914.732: 96.2068% ( 9) 00:09:44.009 7914.732 - 7965.145: 96.2559% ( 10) 00:09:44.009 7965.145 - 8015.557: 96.3050% ( 10) 00:09:44.009 8015.557 - 8065.969: 96.3591% ( 11) 00:09:44.009 8065.969 - 8116.382: 96.4033% ( 9) 00:09:44.009 8116.382 - 
8166.794: 96.4426% ( 8) 00:09:44.009 8166.794 - 8217.206: 96.4819% ( 8) 00:09:44.009 8217.206 - 8267.618: 96.5065% ( 5) 00:09:44.009 8267.618 - 8318.031: 96.5261% ( 4) 00:09:44.009 8318.031 - 8368.443: 96.5654% ( 8) 00:09:44.009 8368.443 - 8418.855: 96.5998% ( 7) 00:09:44.009 8418.855 - 8469.268: 96.6342% ( 7) 00:09:44.009 8469.268 - 8519.680: 96.6834% ( 10) 00:09:44.009 8519.680 - 8570.092: 96.7325% ( 10) 00:09:44.009 8570.092 - 8620.505: 96.7767% ( 9) 00:09:44.009 8620.505 - 8670.917: 96.8357% ( 12) 00:09:44.009 8670.917 - 8721.329: 96.8996% ( 13) 00:09:44.009 8721.329 - 8771.742: 96.9684% ( 14) 00:09:44.009 8771.742 - 8822.154: 97.0322% ( 13) 00:09:44.009 8822.154 - 8872.566: 97.0961% ( 13) 00:09:44.009 8872.566 - 8922.978: 97.1747% ( 16) 00:09:44.009 8922.978 - 8973.391: 97.2386% ( 13) 00:09:44.009 8973.391 - 9023.803: 97.3123% ( 15) 00:09:44.009 9023.803 - 9074.215: 97.3811% ( 14) 00:09:44.009 9074.215 - 9124.628: 97.4401% ( 12) 00:09:44.009 9124.628 - 9175.040: 97.5138% ( 15) 00:09:44.009 9175.040 - 9225.452: 97.5776% ( 13) 00:09:44.009 9225.452 - 9275.865: 97.6317% ( 11) 00:09:44.009 9275.865 - 9326.277: 97.6759% ( 9) 00:09:44.009 9326.277 - 9376.689: 97.7349% ( 12) 00:09:44.009 9376.689 - 9427.102: 97.7840% ( 10) 00:09:44.009 9427.102 - 9477.514: 97.8331% ( 10) 00:09:44.009 9477.514 - 9527.926: 97.8872% ( 11) 00:09:44.009 9527.926 - 9578.338: 97.9363% ( 10) 00:09:44.009 9578.338 - 9628.751: 97.9904% ( 11) 00:09:44.009 9628.751 - 9679.163: 98.0395% ( 10) 00:09:44.009 9679.163 - 9729.575: 98.0837% ( 9) 00:09:44.009 9729.575 - 9779.988: 98.1378% ( 11) 00:09:44.009 9779.988 - 9830.400: 98.1820% ( 9) 00:09:44.009 9830.400 - 9880.812: 98.2360% ( 11) 00:09:44.009 9880.812 - 9931.225: 98.2852% ( 10) 00:09:44.010 9931.225 - 9981.637: 98.3441% ( 12) 00:09:44.010 9981.637 - 10032.049: 98.3933% ( 10) 00:09:44.010 10032.049 - 10082.462: 98.4228% ( 6) 00:09:44.010 10082.462 - 10132.874: 98.4621% ( 8) 00:09:44.010 10132.874 - 10183.286: 98.4915% ( 6) 00:09:44.010 10183.286 - 10233.698: 98.5259% ( 7) 00:09:44.010 10233.698 - 10284.111: 98.5554% ( 6) 00:09:44.010 10284.111 - 10334.523: 98.5800% ( 5) 00:09:44.010 10334.523 - 10384.935: 98.6193% ( 8) 00:09:44.010 10384.935 - 10435.348: 98.6488% ( 6) 00:09:44.010 10435.348 - 10485.760: 98.6783% ( 6) 00:09:44.010 10485.760 - 10536.172: 98.7077% ( 6) 00:09:44.010 10536.172 - 10586.585: 98.7372% ( 6) 00:09:44.010 10586.585 - 10636.997: 98.7765% ( 8) 00:09:44.010 10636.997 - 10687.409: 98.8060% ( 6) 00:09:44.010 10687.409 - 10737.822: 98.8355% ( 6) 00:09:44.010 10737.822 - 10788.234: 98.8699% ( 7) 00:09:44.010 10788.234 - 10838.646: 98.8994% ( 6) 00:09:44.010 10838.646 - 10889.058: 98.9289% ( 6) 00:09:44.010 10889.058 - 10939.471: 98.9682% ( 8) 00:09:44.010 10939.471 - 10989.883: 98.9976% ( 6) 00:09:44.010 10989.883 - 11040.295: 99.0369% ( 8) 00:09:44.010 11040.295 - 11090.708: 99.0664% ( 6) 00:09:44.010 11090.708 - 11141.120: 99.0959% ( 6) 00:09:44.010 11141.120 - 11191.532: 99.1205% ( 5) 00:09:44.010 11191.532 - 11241.945: 99.1500% ( 6) 00:09:44.010 11241.945 - 11292.357: 99.1794% ( 6) 00:09:44.010 11292.357 - 11342.769: 99.2138% ( 7) 00:09:44.010 11342.769 - 11393.182: 99.2384% ( 5) 00:09:44.010 11393.182 - 11443.594: 99.2679% ( 6) 00:09:44.010 11443.594 - 11494.006: 99.2925% ( 5) 00:09:44.010 11494.006 - 11544.418: 99.3121% ( 4) 00:09:44.010 11544.418 - 11594.831: 99.3268% ( 3) 00:09:44.010 11594.831 - 11645.243: 99.3367% ( 2) 00:09:44.010 11645.243 - 11695.655: 99.3465% ( 2) 00:09:44.010 11695.655 - 11746.068: 99.3563% ( 2) 00:09:44.010 11746.068 - 
11796.480: 99.3662% ( 2) 00:09:44.010 11796.480 - 11846.892: 99.3711% ( 1) 00:09:44.010 24903.680 - 25004.505: 99.3956% ( 5) 00:09:44.010 25004.505 - 25105.329: 99.4153% ( 4) 00:09:44.010 25105.329 - 25206.154: 99.4349% ( 4) 00:09:44.010 25206.154 - 25306.978: 99.4546% ( 4) 00:09:44.010 25306.978 - 25407.803: 99.4743% ( 4) 00:09:44.010 25407.803 - 25508.628: 99.4939% ( 4) 00:09:44.010 25508.628 - 25609.452: 99.5136% ( 4) 00:09:44.010 25609.452 - 25710.277: 99.5332% ( 4) 00:09:44.010 25710.277 - 25811.102: 99.5578% ( 5) 00:09:44.010 25811.102 - 26012.751: 99.5971% ( 8) 00:09:44.010 26012.751 - 26214.400: 99.6413% ( 9) 00:09:44.010 26214.400 - 26416.049: 99.6855% ( 9) 00:09:44.010 26416.049 - 26617.698: 99.7298% ( 9) 00:09:44.010 26617.698 - 26819.348: 99.7740% ( 9) 00:09:44.010 26819.348 - 27020.997: 99.8182% ( 9) 00:09:44.010 27020.997 - 27222.646: 99.8673% ( 10) 00:09:44.010 27222.646 - 27424.295: 99.9116% ( 9) 00:09:44.010 27424.295 - 27625.945: 99.9558% ( 9) 00:09:44.010 27625.945 - 27827.594: 99.9951% ( 8) 00:09:44.010 27827.594 - 28029.243: 100.0000% ( 1) 00:09:44.010 00:09:44.010 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:44.010 ============================================================================== 00:09:44.010 Range in us Cumulative IO count 00:09:44.010 4814.375 - 4839.582: 0.0344% ( 7) 00:09:44.010 4839.582 - 4864.788: 0.0737% ( 8) 00:09:44.010 4864.788 - 4889.994: 0.1965% ( 25) 00:09:44.010 4889.994 - 4915.200: 0.3243% ( 26) 00:09:44.010 4915.200 - 4940.406: 0.5061% ( 37) 00:09:44.010 4940.406 - 4965.612: 0.7911% ( 58) 00:09:44.010 4965.612 - 4990.818: 1.0515% ( 53) 00:09:44.010 4990.818 - 5016.025: 1.2873% ( 48) 00:09:44.010 5016.025 - 5041.231: 1.7001% ( 84) 00:09:44.010 5041.231 - 5066.437: 2.3241% ( 127) 00:09:44.010 5066.437 - 5091.643: 3.2184% ( 182) 00:09:44.010 5091.643 - 5116.849: 4.0340% ( 166) 00:09:44.010 5116.849 - 5142.055: 5.1002% ( 217) 00:09:44.010 5142.055 - 5167.262: 6.2353% ( 231) 00:09:44.010 5167.262 - 5192.468: 7.3359% ( 224) 00:09:44.010 5192.468 - 5217.674: 8.3628% ( 209) 00:09:44.010 5217.674 - 5242.880: 9.5519% ( 242) 00:09:44.010 5242.880 - 5268.086: 10.7262% ( 239) 00:09:44.010 5268.086 - 5293.292: 11.7925% ( 217) 00:09:44.010 5293.292 - 5318.498: 12.9766% ( 241) 00:09:44.010 5318.498 - 5343.705: 14.0527% ( 219) 00:09:44.010 5343.705 - 5368.911: 15.1680% ( 227) 00:09:44.010 5368.911 - 5394.117: 16.3227% ( 235) 00:09:44.010 5394.117 - 5419.323: 17.4676% ( 233) 00:09:44.010 5419.323 - 5444.529: 18.6272% ( 236) 00:09:44.010 5444.529 - 5469.735: 19.7917% ( 237) 00:09:44.010 5469.735 - 5494.942: 20.9463% ( 235) 00:09:44.010 5494.942 - 5520.148: 22.0617% ( 227) 00:09:44.010 5520.148 - 5545.354: 23.2901% ( 250) 00:09:44.010 5545.354 - 5570.560: 24.4546% ( 237) 00:09:44.010 5570.560 - 5595.766: 25.6977% ( 253) 00:09:44.010 5595.766 - 5620.972: 26.8229% ( 229) 00:09:44.010 5620.972 - 5646.178: 28.0464% ( 249) 00:09:44.010 5646.178 - 5671.385: 29.1765% ( 230) 00:09:44.010 5671.385 - 5696.591: 30.4098% ( 251) 00:09:44.010 5696.591 - 5721.797: 31.5743% ( 237) 00:09:44.010 5721.797 - 5747.003: 32.7683% ( 243) 00:09:44.010 5747.003 - 5772.209: 33.9377% ( 238) 00:09:44.010 5772.209 - 5797.415: 35.1612% ( 249) 00:09:44.010 5797.415 - 5822.622: 36.3257% ( 237) 00:09:44.010 5822.622 - 5847.828: 37.5442% ( 248) 00:09:44.010 5847.828 - 5873.034: 38.7136% ( 238) 00:09:44.010 5873.034 - 5898.240: 39.9273% ( 247) 00:09:44.010 5898.240 - 5923.446: 41.1409% ( 247) 00:09:44.010 5923.446 - 5948.652: 42.3054% ( 237) 00:09:44.010 5948.652 - 
5973.858: 43.4503% ( 233) 00:09:44.010 5973.858 - 5999.065: 44.6000% ( 234) 00:09:44.010 5999.065 - 6024.271: 45.7498% ( 234) 00:09:44.010 6024.271 - 6049.477: 46.9389% ( 242) 00:09:44.010 6049.477 - 6074.683: 48.1083% ( 238) 00:09:44.010 6074.683 - 6099.889: 49.2630% ( 235) 00:09:44.010 6099.889 - 6125.095: 50.4127% ( 234) 00:09:44.010 6125.095 - 6150.302: 51.6067% ( 243) 00:09:44.010 6150.302 - 6175.508: 52.8105% ( 245) 00:09:44.010 6175.508 - 6200.714: 53.9652% ( 235) 00:09:44.010 6200.714 - 6225.920: 55.1051% ( 232) 00:09:44.010 6225.920 - 6251.126: 56.2942% ( 242) 00:09:44.010 6251.126 - 6276.332: 57.4784% ( 241) 00:09:44.010 6276.332 - 6301.538: 58.6576% ( 240) 00:09:44.010 6301.538 - 6326.745: 59.8320% ( 239) 00:09:44.010 6326.745 - 6351.951: 61.0849% ( 255) 00:09:44.010 6351.951 - 6377.157: 62.2592% ( 239) 00:09:44.010 6377.157 - 6402.363: 63.4385% ( 240) 00:09:44.010 6402.363 - 6427.569: 64.6521% ( 247) 00:09:44.010 6427.569 - 6452.775: 65.8756% ( 249) 00:09:44.010 6452.775 - 6503.188: 68.3078% ( 495) 00:09:44.010 6503.188 - 6553.600: 70.7351% ( 494) 00:09:44.010 6553.600 - 6604.012: 73.1918% ( 500) 00:09:44.010 6604.012 - 6654.425: 75.5257% ( 475) 00:09:44.010 6654.425 - 6704.837: 77.9235% ( 488) 00:09:44.010 6704.837 - 6755.249: 80.3312% ( 490) 00:09:44.010 6755.249 - 6805.662: 82.7290% ( 488) 00:09:44.010 6805.662 - 6856.074: 85.0580% ( 474) 00:09:44.010 6856.074 - 6906.486: 87.2150% ( 439) 00:09:44.010 6906.486 - 6956.898: 89.2983% ( 424) 00:09:44.010 6956.898 - 7007.311: 91.1262% ( 372) 00:09:44.010 7007.311 - 7057.723: 92.6494% ( 310) 00:09:44.010 7057.723 - 7108.135: 93.6370% ( 201) 00:09:44.010 7108.135 - 7158.548: 94.2414% ( 123) 00:09:44.010 7158.548 - 7208.960: 94.5362% ( 60) 00:09:44.010 7208.960 - 7259.372: 94.8064% ( 55) 00:09:44.010 7259.372 - 7309.785: 95.0275% ( 45) 00:09:44.010 7309.785 - 7360.197: 95.1897% ( 33) 00:09:44.010 7360.197 - 7410.609: 95.3616% ( 35) 00:09:44.010 7410.609 - 7461.022: 95.4599% ( 20) 00:09:44.010 7461.022 - 7511.434: 95.5631% ( 21) 00:09:44.010 7511.434 - 7561.846: 95.6417% ( 16) 00:09:44.010 7561.846 - 7612.258: 95.7252% ( 17) 00:09:44.010 7612.258 - 7662.671: 95.7940% ( 14) 00:09:44.010 7662.671 - 7713.083: 95.8776% ( 17) 00:09:44.010 7713.083 - 7763.495: 95.9513% ( 15) 00:09:44.010 7763.495 - 7813.908: 96.0348% ( 17) 00:09:44.010 7813.908 - 7864.320: 96.1331% ( 20) 00:09:44.010 7864.320 - 7914.732: 96.2018% ( 14) 00:09:44.010 7914.732 - 7965.145: 96.2756% ( 15) 00:09:44.010 7965.145 - 8015.557: 96.3493% ( 15) 00:09:44.010 8015.557 - 8065.969: 96.4033% ( 11) 00:09:44.010 8065.969 - 8116.382: 96.4574% ( 11) 00:09:44.010 8116.382 - 8166.794: 96.5065% ( 10) 00:09:44.010 8166.794 - 8217.206: 96.5507% ( 9) 00:09:44.010 8217.206 - 8267.618: 96.6244% ( 15) 00:09:44.010 8267.618 - 8318.031: 96.6735% ( 10) 00:09:44.010 8318.031 - 8368.443: 96.7276% ( 11) 00:09:44.010 8368.443 - 8418.855: 96.7915% ( 13) 00:09:44.010 8418.855 - 8469.268: 96.8455% ( 11) 00:09:44.010 8469.268 - 8519.680: 96.8996% ( 11) 00:09:44.010 8519.680 - 8570.092: 96.9733% ( 15) 00:09:44.010 8570.092 - 8620.505: 97.0273% ( 11) 00:09:44.010 8620.505 - 8670.917: 97.0912% ( 13) 00:09:44.010 8670.917 - 8721.329: 97.1403% ( 10) 00:09:44.010 8721.329 - 8771.742: 97.2189% ( 16) 00:09:44.010 8771.742 - 8822.154: 97.2730% ( 11) 00:09:44.010 8822.154 - 8872.566: 97.3221% ( 10) 00:09:44.010 8872.566 - 8922.978: 97.3909% ( 14) 00:09:44.010 8922.978 - 8973.391: 97.4351% ( 9) 00:09:44.010 8973.391 - 9023.803: 97.4941% ( 12) 00:09:44.010 9023.803 - 9074.215: 97.5580% ( 13) 00:09:44.010 
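The identify pass that precedes this perf run walked every bound controller the same way nvme.sh does with its for bdf in "${bdfs[@]}" loop. A minimal sketch of that loop with the BDF list written out by hand (the four BDFs are examples matching the controllers attached in this run, not discovered via the harness):

  for bdf in 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0; do
      # query one PCIe controller per iteration, as seen in the log above
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
          -r "trtype:PCIe traddr:${bdf}" -i 0
  done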
9074.215 - 9124.628: 97.6219% ( 13) 00:09:44.010 9124.628 - 9175.040: 97.6808% ( 12) 00:09:44.010 9175.040 - 9225.452: 97.7349% ( 11) 00:09:44.010 9225.452 - 9275.865: 97.7938% ( 12) 00:09:44.010 9275.865 - 9326.277: 97.8528% ( 12) 00:09:44.010 9326.277 - 9376.689: 97.9068% ( 11) 00:09:44.010 9376.689 - 9427.102: 97.9560% ( 10) 00:09:44.010 9427.102 - 9477.514: 98.0149% ( 12) 00:09:44.010 9477.514 - 9527.926: 98.0788% ( 13) 00:09:44.011 9527.926 - 9578.338: 98.1378% ( 12) 00:09:44.011 9578.338 - 9628.751: 98.1771% ( 8) 00:09:44.011 9628.751 - 9679.163: 98.2410% ( 13) 00:09:44.011 9679.163 - 9729.575: 98.2901% ( 10) 00:09:44.011 9729.575 - 9779.988: 98.3294% ( 8) 00:09:44.011 9779.988 - 9830.400: 98.3638% ( 7) 00:09:44.011 9830.400 - 9880.812: 98.4129% ( 10) 00:09:44.011 9880.812 - 9931.225: 98.4572% ( 9) 00:09:44.011 9931.225 - 9981.637: 98.4915% ( 7) 00:09:44.011 9981.637 - 10032.049: 98.5309% ( 8) 00:09:44.011 10032.049 - 10082.462: 98.5702% ( 8) 00:09:44.011 10082.462 - 10132.874: 98.6193% ( 10) 00:09:44.011 10132.874 - 10183.286: 98.6684% ( 10) 00:09:44.011 10183.286 - 10233.698: 98.7028% ( 7) 00:09:44.011 10233.698 - 10284.111: 98.7323% ( 6) 00:09:44.011 10284.111 - 10334.523: 98.7618% ( 6) 00:09:44.011 10334.523 - 10384.935: 98.7864% ( 5) 00:09:44.011 10384.935 - 10435.348: 98.8060% ( 4) 00:09:44.011 10435.348 - 10485.760: 98.8355% ( 6) 00:09:44.011 10485.760 - 10536.172: 98.8551% ( 4) 00:09:44.011 10536.172 - 10586.585: 98.8650% ( 2) 00:09:44.011 10586.585 - 10636.997: 98.8797% ( 3) 00:09:44.011 10636.997 - 10687.409: 98.8895% ( 2) 00:09:44.011 10687.409 - 10737.822: 98.8994% ( 2) 00:09:44.011 10737.822 - 10788.234: 98.9043% ( 1) 00:09:44.011 10788.234 - 10838.646: 98.9141% ( 2) 00:09:44.011 10838.646 - 10889.058: 98.9239% ( 2) 00:09:44.011 10889.058 - 10939.471: 98.9338% ( 2) 00:09:44.011 10939.471 - 10989.883: 98.9387% ( 1) 00:09:44.011 10989.883 - 11040.295: 98.9485% ( 2) 00:09:44.011 11040.295 - 11090.708: 98.9583% ( 2) 00:09:44.011 11090.708 - 11141.120: 98.9682% ( 2) 00:09:44.011 11141.120 - 11191.532: 98.9731% ( 1) 00:09:44.011 11191.532 - 11241.945: 98.9829% ( 2) 00:09:44.011 11241.945 - 11292.357: 98.9927% ( 2) 00:09:44.011 11292.357 - 11342.769: 99.0075% ( 3) 00:09:44.011 11342.769 - 11393.182: 99.0124% ( 1) 00:09:44.011 11393.182 - 11443.594: 99.0173% ( 1) 00:09:44.011 11443.594 - 11494.006: 99.0271% ( 2) 00:09:44.011 11494.006 - 11544.418: 99.0320% ( 1) 00:09:44.011 11544.418 - 11594.831: 99.0419% ( 2) 00:09:44.011 11594.831 - 11645.243: 99.0517% ( 2) 00:09:44.011 11645.243 - 11695.655: 99.0566% ( 1) 00:09:44.011 11695.655 - 11746.068: 99.0664% ( 2) 00:09:44.011 11746.068 - 11796.480: 99.0812% ( 3) 00:09:44.011 11796.480 - 11846.892: 99.0861% ( 1) 00:09:44.011 11846.892 - 11897.305: 99.0959% ( 2) 00:09:44.011 11897.305 - 11947.717: 99.1008% ( 1) 00:09:44.011 11947.717 - 11998.129: 99.1057% ( 1) 00:09:44.011 11998.129 - 12048.542: 99.1205% ( 3) 00:09:44.011 12048.542 - 12098.954: 99.1254% ( 1) 00:09:44.011 12098.954 - 12149.366: 99.1352% ( 2) 00:09:44.011 12149.366 - 12199.778: 99.1450% ( 2) 00:09:44.011 12199.778 - 12250.191: 99.1500% ( 1) 00:09:44.011 12250.191 - 12300.603: 99.1598% ( 2) 00:09:44.011 12300.603 - 12351.015: 99.1696% ( 2) 00:09:44.011 12351.015 - 12401.428: 99.1745% ( 1) 00:09:44.011 12401.428 - 12451.840: 99.1844% ( 2) 00:09:44.011 12451.840 - 12502.252: 99.1942% ( 2) 00:09:44.011 12502.252 - 12552.665: 99.2040% ( 2) 00:09:44.011 12552.665 - 12603.077: 99.2138% ( 2) 00:09:44.011 12603.077 - 12653.489: 99.2188% ( 1) 00:09:44.011 12653.489 - 12703.902: 
99.2286% ( 2) 00:09:44.011 12703.902 - 12754.314: 99.2384% ( 2) 00:09:44.011 12754.314 - 12804.726: 99.2433% ( 1) 00:09:44.011 12804.726 - 12855.138: 99.2531% ( 2) 00:09:44.011 12855.138 - 12905.551: 99.2630% ( 2) 00:09:44.011 12905.551 - 13006.375: 99.2777% ( 3) 00:09:44.011 13006.375 - 13107.200: 99.2974% ( 4) 00:09:44.011 13107.200 - 13208.025: 99.3121% ( 3) 00:09:44.011 13208.025 - 13308.849: 99.3318% ( 4) 00:09:44.011 13308.849 - 13409.674: 99.3465% ( 3) 00:09:44.011 13409.674 - 13510.498: 99.3662% ( 4) 00:09:44.011 13510.498 - 13611.323: 99.3711% ( 1) 00:09:44.011 23592.960 - 23693.785: 99.3760% ( 1) 00:09:44.011 23693.785 - 23794.609: 99.3907% ( 3) 00:09:44.011 23794.609 - 23895.434: 99.4104% ( 4) 00:09:44.011 23895.434 - 23996.258: 99.4300% ( 4) 00:09:44.011 23996.258 - 24097.083: 99.4448% ( 3) 00:09:44.011 24097.083 - 24197.908: 99.4644% ( 4) 00:09:44.011 24197.908 - 24298.732: 99.4841% ( 4) 00:09:44.011 24298.732 - 24399.557: 99.5037% ( 4) 00:09:44.011 24399.557 - 24500.382: 99.5234% ( 4) 00:09:44.011 24500.382 - 24601.206: 99.5430% ( 4) 00:09:44.011 24601.206 - 24702.031: 99.5578% ( 3) 00:09:44.011 24702.031 - 24802.855: 99.5774% ( 4) 00:09:44.011 24802.855 - 24903.680: 99.5971% ( 4) 00:09:44.011 24903.680 - 25004.505: 99.6167% ( 4) 00:09:44.011 25004.505 - 25105.329: 99.6364% ( 4) 00:09:44.011 25105.329 - 25206.154: 99.6561% ( 4) 00:09:44.011 25206.154 - 25306.978: 99.6757% ( 4) 00:09:44.011 25306.978 - 25407.803: 99.6954% ( 4) 00:09:44.011 25407.803 - 25508.628: 99.7150% ( 4) 00:09:44.011 25508.628 - 25609.452: 99.7298% ( 3) 00:09:44.011 25609.452 - 25710.277: 99.7494% ( 4) 00:09:44.011 25710.277 - 25811.102: 99.7691% ( 4) 00:09:44.011 25811.102 - 26012.751: 99.8084% ( 8) 00:09:44.011 26012.751 - 26214.400: 99.8477% ( 8) 00:09:44.011 26214.400 - 26416.049: 99.8870% ( 8) 00:09:44.011 26416.049 - 26617.698: 99.9263% ( 8) 00:09:44.011 26617.698 - 26819.348: 99.9656% ( 8) 00:09:44.011 26819.348 - 27020.997: 100.0000% ( 7) 00:09:44.011 00:09:44.011 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:44.011 ============================================================================== 00:09:44.011 Range in us Cumulative IO count 00:09:44.011 4990.818 - 5016.025: 0.0540% ( 11) 00:09:44.011 5016.025 - 5041.231: 0.1228% ( 14) 00:09:44.011 5041.231 - 5066.437: 0.2899% ( 34) 00:09:44.011 5066.437 - 5091.643: 0.4864% ( 40) 00:09:44.011 5091.643 - 5116.849: 0.7518% ( 54) 00:09:44.011 5116.849 - 5142.055: 1.0220% ( 55) 00:09:44.011 5142.055 - 5167.262: 1.4397% ( 85) 00:09:44.011 5167.262 - 5192.468: 1.9114% ( 96) 00:09:44.011 5192.468 - 5217.674: 2.5550% ( 131) 00:09:44.011 5217.674 - 5242.880: 3.3510% ( 162) 00:09:44.011 5242.880 - 5268.086: 4.4860% ( 231) 00:09:44.011 5268.086 - 5293.292: 5.6751% ( 242) 00:09:44.011 5293.292 - 5318.498: 6.8740% ( 244) 00:09:44.011 5318.498 - 5343.705: 8.1564% ( 261) 00:09:44.011 5343.705 - 5368.911: 9.4094% ( 255) 00:09:44.011 5368.911 - 5394.117: 10.6967% ( 262) 00:09:44.011 5394.117 - 5419.323: 11.9546% ( 256) 00:09:44.011 5419.323 - 5444.529: 13.2862% ( 271) 00:09:44.011 5444.529 - 5469.735: 14.6816% ( 284) 00:09:44.011 5469.735 - 5494.942: 16.0869% ( 286) 00:09:44.011 5494.942 - 5520.148: 17.4676% ( 281) 00:09:44.011 5520.148 - 5545.354: 18.8483% ( 281) 00:09:44.011 5545.354 - 5570.560: 20.1946% ( 274) 00:09:44.011 5570.560 - 5595.766: 21.5753% ( 281) 00:09:44.011 5595.766 - 5620.972: 22.9707% ( 284) 00:09:44.011 5620.972 - 5646.178: 24.3318% ( 277) 00:09:44.011 5646.178 - 5671.385: 25.7567% ( 290) 00:09:44.011 5671.385 - 5696.591: 
27.1226% ( 278) 00:09:44.011 5696.591 - 5721.797: 28.5181% ( 284) 00:09:44.011 5721.797 - 5747.003: 29.8988% ( 281) 00:09:44.011 5747.003 - 5772.209: 31.2746% ( 280) 00:09:44.011 5772.209 - 5797.415: 32.6258% ( 275) 00:09:44.011 5797.415 - 5822.622: 34.0016% ( 280) 00:09:44.011 5822.622 - 5847.828: 35.3872% ( 282) 00:09:44.011 5847.828 - 5873.034: 36.8023% ( 288) 00:09:44.011 5873.034 - 5898.240: 38.1830% ( 281) 00:09:44.011 5898.240 - 5923.446: 39.5489% ( 278) 00:09:44.011 5923.446 - 5948.652: 40.9346% ( 282) 00:09:44.011 5948.652 - 5973.858: 42.3398% ( 286) 00:09:44.011 5973.858 - 5999.065: 43.7353% ( 284) 00:09:44.011 5999.065 - 6024.271: 45.1749% ( 293) 00:09:44.011 6024.271 - 6049.477: 46.5851% ( 287) 00:09:44.011 6049.477 - 6074.683: 48.0051% ( 289) 00:09:44.011 6074.683 - 6099.889: 49.4153% ( 287) 00:09:44.011 6099.889 - 6125.095: 50.8648% ( 295) 00:09:44.011 6125.095 - 6150.302: 52.2700% ( 286) 00:09:44.011 6150.302 - 6175.508: 53.6802% ( 287) 00:09:44.011 6175.508 - 6200.714: 55.0904% ( 287) 00:09:44.011 6200.714 - 6225.920: 56.5104% ( 289) 00:09:44.011 6225.920 - 6251.126: 57.9255% ( 288) 00:09:44.011 6251.126 - 6276.332: 59.3357% ( 287) 00:09:44.011 6276.332 - 6301.538: 60.7754% ( 293) 00:09:44.011 6301.538 - 6326.745: 62.1904% ( 288) 00:09:44.011 6326.745 - 6351.951: 63.6399% ( 295) 00:09:44.011 6351.951 - 6377.157: 65.0501% ( 287) 00:09:44.011 6377.157 - 6402.363: 66.4898% ( 293) 00:09:44.011 6402.363 - 6427.569: 67.8803% ( 283) 00:09:44.011 6427.569 - 6452.775: 69.3101% ( 291) 00:09:44.011 6452.775 - 6503.188: 72.1502% ( 578) 00:09:44.011 6503.188 - 6553.600: 75.0098% ( 582) 00:09:44.011 6553.600 - 6604.012: 77.8400% ( 576) 00:09:44.011 6604.012 - 6654.425: 80.7193% ( 586) 00:09:44.011 6654.425 - 6704.837: 83.4119% ( 548) 00:09:44.011 6704.837 - 6755.249: 85.9670% ( 520) 00:09:44.011 6755.249 - 6805.662: 88.3058% ( 476) 00:09:44.011 6805.662 - 6856.074: 90.4629% ( 439) 00:09:44.011 6856.074 - 6906.486: 92.1531% ( 344) 00:09:44.011 6906.486 - 6956.898: 93.2537% ( 224) 00:09:44.011 6956.898 - 7007.311: 93.9072% ( 133) 00:09:44.011 7007.311 - 7057.723: 94.2807% ( 76) 00:09:44.011 7057.723 - 7108.135: 94.5951% ( 64) 00:09:44.011 7108.135 - 7158.548: 94.8162% ( 45) 00:09:44.011 7158.548 - 7208.960: 95.0226% ( 42) 00:09:44.011 7208.960 - 7259.372: 95.1847% ( 33) 00:09:44.011 7259.372 - 7309.785: 95.3076% ( 25) 00:09:44.011 7309.785 - 7360.197: 95.4304% ( 25) 00:09:44.011 7360.197 - 7410.609: 95.5533% ( 25) 00:09:44.011 7410.609 - 7461.022: 95.6958% ( 29) 00:09:44.011 7461.022 - 7511.434: 95.8039% ( 22) 00:09:44.011 7511.434 - 7561.846: 95.9169% ( 23) 00:09:44.011 7561.846 - 7612.258: 96.0299% ( 23) 00:09:44.011 7612.258 - 7662.671: 96.1331% ( 21) 00:09:44.011 7662.671 - 7713.083: 96.2362% ( 21) 00:09:44.011 7713.083 - 7763.495: 96.3394% ( 21) 00:09:44.012 7763.495 - 7813.908: 96.4279% ( 18) 00:09:44.012 7813.908 - 7864.320: 96.4967% ( 14) 00:09:44.012 7864.320 - 7914.732: 96.5654% ( 14) 00:09:44.012 7914.732 - 7965.145: 96.6293% ( 13) 00:09:44.012 7965.145 - 8015.557: 96.6932% ( 13) 00:09:44.012 8015.557 - 8065.969: 96.7571% ( 13) 00:09:44.012 8065.969 - 8116.382: 96.8308% ( 15) 00:09:44.012 8116.382 - 8166.794: 96.8947% ( 13) 00:09:44.012 8166.794 - 8217.206: 96.9585% ( 13) 00:09:44.012 8217.206 - 8267.618: 97.0224% ( 13) 00:09:44.012 8267.618 - 8318.031: 97.0912% ( 14) 00:09:44.012 8318.031 - 8368.443: 97.1551% ( 13) 00:09:44.012 8368.443 - 8418.855: 97.2288% ( 15) 00:09:44.012 8418.855 - 8469.268: 97.3074% ( 16) 00:09:44.012 8469.268 - 8519.680: 97.3860% ( 16) 00:09:44.012 
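The "Summary latency data" blocks interleaved with these histograms all share the same "percentile : latency" layout, so a given percentile can be pulled out of a saved console log with a one-liner. A sketch, assuming the output was captured to perf.log with the per-line Jenkins timestamps stripped (perf.log and the chosen percentile are illustrative):

  # print each device header followed by its 99th-percentile read latency
  awk '/Summary latency data for/ { dev = $0 }
       /99\.00000% *:/ { print dev, "->", $NF }' perf.log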
8519.680 - 8570.092: 97.4646% ( 16) 00:09:44.012 8570.092 - 8620.505: 97.5383% ( 15) 00:09:44.012 8620.505 - 8670.917: 97.6071% ( 14) 00:09:44.012 8670.917 - 8721.329: 97.6710% ( 13) 00:09:44.012 8721.329 - 8771.742: 97.7250% ( 11) 00:09:44.012 8771.742 - 8822.154: 97.7840% ( 12) 00:09:44.012 8822.154 - 8872.566: 97.8430% ( 12) 00:09:44.012 8872.566 - 8922.978: 97.9068% ( 13) 00:09:44.012 8922.978 - 8973.391: 97.9707% ( 13) 00:09:44.012 8973.391 - 9023.803: 98.0051% ( 7) 00:09:44.012 9023.803 - 9074.215: 98.0395% ( 7) 00:09:44.012 9074.215 - 9124.628: 98.0739% ( 7) 00:09:44.012 9124.628 - 9175.040: 98.1034% ( 6) 00:09:44.012 9175.040 - 9225.452: 98.1329% ( 6) 00:09:44.012 9225.452 - 9275.865: 98.1673% ( 7) 00:09:44.012 9275.865 - 9326.277: 98.1918% ( 5) 00:09:44.012 9326.277 - 9376.689: 98.2213% ( 6) 00:09:44.012 9376.689 - 9427.102: 98.2557% ( 7) 00:09:44.012 9427.102 - 9477.514: 98.2803% ( 5) 00:09:44.012 9477.514 - 9527.926: 98.3097% ( 6) 00:09:44.012 9527.926 - 9578.338: 98.3441% ( 7) 00:09:44.012 9578.338 - 9628.751: 98.3736% ( 6) 00:09:44.012 9628.751 - 9679.163: 98.4080% ( 7) 00:09:44.012 9679.163 - 9729.575: 98.4326% ( 5) 00:09:44.012 9729.575 - 9779.988: 98.4621% ( 6) 00:09:44.012 9779.988 - 9830.400: 98.4965% ( 7) 00:09:44.012 9830.400 - 9880.812: 98.5210% ( 5) 00:09:44.012 9880.812 - 9931.225: 98.5505% ( 6) 00:09:44.012 9931.225 - 9981.637: 98.5800% ( 6) 00:09:44.012 9981.637 - 10032.049: 98.6144% ( 7) 00:09:44.012 10032.049 - 10082.462: 98.6439% ( 6) 00:09:44.012 10082.462 - 10132.874: 98.6635% ( 4) 00:09:44.012 10132.874 - 10183.286: 98.6733% ( 2) 00:09:44.012 10183.286 - 10233.698: 98.6832% ( 2) 00:09:44.012 10233.698 - 10284.111: 98.6979% ( 3) 00:09:44.012 10284.111 - 10334.523: 98.7077% ( 2) 00:09:44.012 10334.523 - 10384.935: 98.7176% ( 2) 00:09:44.012 10384.935 - 10435.348: 98.7323% ( 3) 00:09:44.012 10435.348 - 10485.760: 98.7421% ( 2) 00:09:44.012 11746.068 - 11796.480: 98.7471% ( 1) 00:09:44.012 11796.480 - 11846.892: 98.7569% ( 2) 00:09:44.012 11846.892 - 11897.305: 98.7716% ( 3) 00:09:44.012 11897.305 - 11947.717: 98.7765% ( 1) 00:09:44.012 11947.717 - 11998.129: 98.7864% ( 2) 00:09:44.012 11998.129 - 12048.542: 98.7962% ( 2) 00:09:44.012 12048.542 - 12098.954: 98.8109% ( 3) 00:09:44.012 12098.954 - 12149.366: 98.8208% ( 2) 00:09:44.012 12149.366 - 12199.778: 98.8355% ( 3) 00:09:44.012 12199.778 - 12250.191: 98.8453% ( 2) 00:09:44.012 12250.191 - 12300.603: 98.8502% ( 1) 00:09:44.012 12300.603 - 12351.015: 98.8551% ( 1) 00:09:44.012 12351.015 - 12401.428: 98.8650% ( 2) 00:09:44.012 12401.428 - 12451.840: 98.8748% ( 2) 00:09:44.012 12451.840 - 12502.252: 98.8846% ( 2) 00:09:44.012 12502.252 - 12552.665: 98.8945% ( 2) 00:09:44.012 12552.665 - 12603.077: 98.9043% ( 2) 00:09:44.012 12603.077 - 12653.489: 98.9141% ( 2) 00:09:44.012 12653.489 - 12703.902: 98.9239% ( 2) 00:09:44.012 12703.902 - 12754.314: 98.9338% ( 2) 00:09:44.012 12754.314 - 12804.726: 98.9485% ( 3) 00:09:44.012 12804.726 - 12855.138: 98.9583% ( 2) 00:09:44.012 12855.138 - 12905.551: 98.9682% ( 2) 00:09:44.012 12905.551 - 13006.375: 98.9878% ( 4) 00:09:44.012 13006.375 - 13107.200: 99.0075% ( 4) 00:09:44.012 13107.200 - 13208.025: 99.0271% ( 4) 00:09:44.012 13208.025 - 13308.849: 99.0468% ( 4) 00:09:44.012 13308.849 - 13409.674: 99.0713% ( 5) 00:09:44.012 13409.674 - 13510.498: 99.0910% ( 4) 00:09:44.012 13510.498 - 13611.323: 99.1057% ( 3) 00:09:44.012 13611.323 - 13712.148: 99.1254% ( 4) 00:09:44.012 13712.148 - 13812.972: 99.1450% ( 4) 00:09:44.012 13812.972 - 13913.797: 99.1696% ( 5) 00:09:44.012 
13913.797 - 14014.622: 99.1893% ( 4) 00:09:44.012 14014.622 - 14115.446: 99.2089% ( 4) 00:09:44.012 14115.446 - 14216.271: 99.2286% ( 4) 00:09:44.012 14216.271 - 14317.095: 99.2531% ( 5) 00:09:44.012 14317.095 - 14417.920: 99.2728% ( 4) 00:09:44.012 14417.920 - 14518.745: 99.2925% ( 4) 00:09:44.012 14518.745 - 14619.569: 99.3121% ( 4) 00:09:44.012 14619.569 - 14720.394: 99.3367% ( 5) 00:09:44.012 14720.394 - 14821.218: 99.3563% ( 4) 00:09:44.012 14821.218 - 14922.043: 99.3711% ( 3) 00:09:44.012 22383.065 - 22483.889: 99.3760% ( 1) 00:09:44.012 22483.889 - 22584.714: 99.3956% ( 4) 00:09:44.012 22584.714 - 22685.538: 99.4153% ( 4) 00:09:44.012 22685.538 - 22786.363: 99.4349% ( 4) 00:09:44.012 22786.363 - 22887.188: 99.4546% ( 4) 00:09:44.012 22887.188 - 22988.012: 99.4792% ( 5) 00:09:44.012 22988.012 - 23088.837: 99.4988% ( 4) 00:09:44.012 23088.837 - 23189.662: 99.5185% ( 4) 00:09:44.012 23189.662 - 23290.486: 99.5381% ( 4) 00:09:44.012 23290.486 - 23391.311: 99.5578% ( 4) 00:09:44.012 23391.311 - 23492.135: 99.5824% ( 5) 00:09:44.012 23492.135 - 23592.960: 99.6020% ( 4) 00:09:44.012 23592.960 - 23693.785: 99.6217% ( 4) 00:09:44.012 23693.785 - 23794.609: 99.6413% ( 4) 00:09:44.012 23794.609 - 23895.434: 99.6610% ( 4) 00:09:44.012 23895.434 - 23996.258: 99.6806% ( 4) 00:09:44.012 23996.258 - 24097.083: 99.7003% ( 4) 00:09:44.012 24097.083 - 24197.908: 99.7199% ( 4) 00:09:44.012 24197.908 - 24298.732: 99.7396% ( 4) 00:09:44.012 24298.732 - 24399.557: 99.7592% ( 4) 00:09:44.012 24399.557 - 24500.382: 99.7789% ( 4) 00:09:44.012 24500.382 - 24601.206: 99.7985% ( 4) 00:09:44.012 24601.206 - 24702.031: 99.8182% ( 4) 00:09:44.012 24702.031 - 24802.855: 99.8379% ( 4) 00:09:44.012 24802.855 - 24903.680: 99.8624% ( 5) 00:09:44.012 24903.680 - 25004.505: 99.8821% ( 4) 00:09:44.012 25004.505 - 25105.329: 99.9017% ( 4) 00:09:44.012 25105.329 - 25206.154: 99.9214% ( 4) 00:09:44.012 25206.154 - 25306.978: 99.9410% ( 4) 00:09:44.012 25306.978 - 25407.803: 99.9656% ( 5) 00:09:44.012 25407.803 - 25508.628: 99.9853% ( 4) 00:09:44.012 25508.628 - 25609.452: 100.0000% ( 3) 00:09:44.012 00:09:44.012 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:44.012 ============================================================================== 00:09:44.012 Range in us Cumulative IO count 00:09:44.012 4990.818 - 5016.025: 0.0786% ( 16) 00:09:44.012 5016.025 - 5041.231: 0.1523% ( 15) 00:09:44.012 5041.231 - 5066.437: 0.2653% ( 23) 00:09:44.012 5066.437 - 5091.643: 0.4864% ( 45) 00:09:44.012 5091.643 - 5116.849: 0.6928% ( 42) 00:09:44.012 5116.849 - 5142.055: 1.0073% ( 64) 00:09:44.012 5142.055 - 5167.262: 1.3610% ( 72) 00:09:44.012 5167.262 - 5192.468: 1.8278% ( 95) 00:09:44.012 5192.468 - 5217.674: 2.5501% ( 147) 00:09:44.012 5217.674 - 5242.880: 3.4198% ( 177) 00:09:44.012 5242.880 - 5268.086: 4.4664% ( 213) 00:09:44.012 5268.086 - 5293.292: 5.6162% ( 234) 00:09:44.012 5293.292 - 5318.498: 6.8691% ( 255) 00:09:44.012 5318.498 - 5343.705: 8.1073% ( 252) 00:09:44.012 5343.705 - 5368.911: 9.4585% ( 275) 00:09:44.012 5368.911 - 5394.117: 10.8392% ( 281) 00:09:44.012 5394.117 - 5419.323: 12.1167% ( 260) 00:09:44.012 5419.323 - 5444.529: 13.5024% ( 282) 00:09:44.012 5444.529 - 5469.735: 14.8781% ( 280) 00:09:44.012 5469.735 - 5494.942: 16.2490% ( 279) 00:09:44.012 5494.942 - 5520.148: 17.6248% ( 280) 00:09:44.012 5520.148 - 5545.354: 19.0055% ( 281) 00:09:44.012 5545.354 - 5570.560: 20.3911% ( 282) 00:09:44.012 5570.560 - 5595.766: 21.7866% ( 284) 00:09:44.012 5595.766 - 5620.972: 23.1722% ( 282) 00:09:44.012 
5620.972 - 5646.178: 24.5480% ( 280) 00:09:44.012 5646.178 - 5671.385: 25.9483% ( 285) 00:09:44.012 5671.385 - 5696.591: 27.3487% ( 285) 00:09:44.012 5696.591 - 5721.797: 28.7244% ( 280) 00:09:44.012 5721.797 - 5747.003: 30.0904% ( 278) 00:09:44.012 5747.003 - 5772.209: 31.5006% ( 287) 00:09:44.012 5772.209 - 5797.415: 32.9108% ( 287) 00:09:44.012 5797.415 - 5822.622: 34.2816% ( 279) 00:09:44.012 5822.622 - 5847.828: 35.6525% ( 279) 00:09:44.012 5847.828 - 5873.034: 37.0234% ( 279) 00:09:44.012 5873.034 - 5898.240: 38.4483% ( 290) 00:09:44.012 5898.240 - 5923.446: 39.8634% ( 288) 00:09:44.012 5923.446 - 5948.652: 41.2343% ( 279) 00:09:44.012 5948.652 - 5973.858: 42.5855% ( 275) 00:09:44.012 5973.858 - 5999.065: 43.9809% ( 284) 00:09:44.012 5999.065 - 6024.271: 45.4059% ( 290) 00:09:44.012 6024.271 - 6049.477: 46.7816% ( 280) 00:09:44.012 6049.477 - 6074.683: 48.1673% ( 282) 00:09:44.012 6074.683 - 6099.889: 49.5873% ( 289) 00:09:44.012 6099.889 - 6125.095: 50.9827% ( 284) 00:09:44.012 6125.095 - 6150.302: 52.3978% ( 288) 00:09:44.013 6150.302 - 6175.508: 53.7982% ( 285) 00:09:44.013 6175.508 - 6200.714: 55.2083% ( 287) 00:09:44.013 6200.714 - 6225.920: 56.6234% ( 288) 00:09:44.013 6225.920 - 6251.126: 57.9943% ( 279) 00:09:44.013 6251.126 - 6276.332: 59.4045% ( 287) 00:09:44.013 6276.332 - 6301.538: 60.8294% ( 290) 00:09:44.013 6301.538 - 6326.745: 62.2150% ( 282) 00:09:44.013 6326.745 - 6351.951: 63.6301% ( 288) 00:09:44.013 6351.951 - 6377.157: 65.0059% ( 280) 00:09:44.013 6377.157 - 6402.363: 66.4357% ( 291) 00:09:44.013 6402.363 - 6427.569: 67.8607% ( 290) 00:09:44.013 6427.569 - 6452.775: 69.2659% ( 286) 00:09:44.013 6452.775 - 6503.188: 72.0765% ( 572) 00:09:44.013 6503.188 - 6553.600: 74.9165% ( 578) 00:09:44.013 6553.600 - 6604.012: 77.7467% ( 576) 00:09:44.013 6604.012 - 6654.425: 80.5572% ( 572) 00:09:44.013 6654.425 - 6704.837: 83.2301% ( 544) 00:09:44.013 6704.837 - 6755.249: 85.7655% ( 516) 00:09:44.013 6755.249 - 6805.662: 88.0896% ( 473) 00:09:44.013 6805.662 - 6856.074: 90.1435% ( 418) 00:09:44.013 6856.074 - 6906.486: 91.8386% ( 345) 00:09:44.013 6906.486 - 6956.898: 92.9344% ( 223) 00:09:44.013 6956.898 - 7007.311: 93.5535% ( 126) 00:09:44.013 7007.311 - 7057.723: 93.9858% ( 88) 00:09:44.013 7057.723 - 7108.135: 94.3151% ( 67) 00:09:44.013 7108.135 - 7158.548: 94.6099% ( 60) 00:09:44.013 7158.548 - 7208.960: 94.8555% ( 50) 00:09:44.013 7208.960 - 7259.372: 95.0668% ( 43) 00:09:44.013 7259.372 - 7309.785: 95.2585% ( 39) 00:09:44.013 7309.785 - 7360.197: 95.4157% ( 32) 00:09:44.013 7360.197 - 7410.609: 95.5533% ( 28) 00:09:44.013 7410.609 - 7461.022: 95.6712% ( 24) 00:09:44.013 7461.022 - 7511.434: 95.7940% ( 25) 00:09:44.013 7511.434 - 7561.846: 95.9119% ( 24) 00:09:44.013 7561.846 - 7612.258: 96.0250% ( 23) 00:09:44.013 7612.258 - 7662.671: 96.1281% ( 21) 00:09:44.013 7662.671 - 7713.083: 96.2264% ( 20) 00:09:44.013 7713.083 - 7763.495: 96.3296% ( 21) 00:09:44.013 7763.495 - 7813.908: 96.4377% ( 22) 00:09:44.013 7813.908 - 7864.320: 96.5212% ( 17) 00:09:44.013 7864.320 - 7914.732: 96.6342% ( 23) 00:09:44.013 7914.732 - 7965.145: 96.7472% ( 23) 00:09:44.013 7965.145 - 8015.557: 96.8553% ( 22) 00:09:44.013 8015.557 - 8065.969: 96.9634% ( 22) 00:09:44.013 8065.969 - 8116.382: 97.0814% ( 24) 00:09:44.013 8116.382 - 8166.794: 97.1993% ( 24) 00:09:44.013 8166.794 - 8217.206: 97.3221% ( 25) 00:09:44.013 8217.206 - 8267.618: 97.4351% ( 23) 00:09:44.013 8267.618 - 8318.031: 97.5531% ( 24) 00:09:44.013 8318.031 - 8368.443: 97.6710% ( 24) 00:09:44.013 8368.443 - 8418.855: 97.7791% ( 
22) 00:09:44.013 8418.855 - 8469.268: 97.8577% ( 16) 00:09:44.013 8469.268 - 8519.680: 97.9265% ( 14) 00:09:44.013 8519.680 - 8570.092: 98.0002% ( 15) 00:09:44.013 8570.092 - 8620.505: 98.0542% ( 11) 00:09:44.013 8620.505 - 8670.917: 98.0985% ( 9) 00:09:44.013 8670.917 - 8721.329: 98.1427% ( 9) 00:09:44.013 8721.329 - 8771.742: 98.1771% ( 7) 00:09:44.013 8771.742 - 8822.154: 98.2262% ( 10) 00:09:44.013 8822.154 - 8872.566: 98.2606% ( 7) 00:09:44.013 8872.566 - 8922.978: 98.3097% ( 10) 00:09:44.013 8922.978 - 8973.391: 98.3196% ( 2) 00:09:44.013 8973.391 - 9023.803: 98.3343% ( 3) 00:09:44.013 9023.803 - 9074.215: 98.3441% ( 2) 00:09:44.013 9074.215 - 9124.628: 98.3540% ( 2) 00:09:44.013 9124.628 - 9175.040: 98.3687% ( 3) 00:09:44.013 9175.040 - 9225.452: 98.3785% ( 2) 00:09:44.013 9225.452 - 9275.865: 98.3933% ( 3) 00:09:44.013 9275.865 - 9326.277: 98.4031% ( 2) 00:09:44.013 9326.277 - 9376.689: 98.4129% ( 2) 00:09:44.013 9376.689 - 9427.102: 98.4277% ( 3) 00:09:44.013 9427.102 - 9477.514: 98.4375% ( 2) 00:09:44.013 9477.514 - 9527.926: 98.4473% ( 2) 00:09:44.013 9527.926 - 9578.338: 98.4621% ( 3) 00:09:44.013 9578.338 - 9628.751: 98.4719% ( 2) 00:09:44.013 9628.751 - 9679.163: 98.4866% ( 3) 00:09:44.013 9679.163 - 9729.575: 98.4965% ( 2) 00:09:44.013 9729.575 - 9779.988: 98.5063% ( 2) 00:09:44.013 9779.988 - 9830.400: 98.5210% ( 3) 00:09:44.013 9830.400 - 9880.812: 98.5309% ( 2) 00:09:44.013 9880.812 - 9931.225: 98.5456% ( 3) 00:09:44.013 9931.225 - 9981.637: 98.5554% ( 2) 00:09:44.013 9981.637 - 10032.049: 98.5653% ( 2) 00:09:44.013 10032.049 - 10082.462: 98.5800% ( 3) 00:09:44.013 10082.462 - 10132.874: 98.5898% ( 2) 00:09:44.013 10132.874 - 10183.286: 98.6046% ( 3) 00:09:44.013 10183.286 - 10233.698: 98.6144% ( 2) 00:09:44.013 10233.698 - 10284.111: 98.6291% ( 3) 00:09:44.013 10284.111 - 10334.523: 98.6340% ( 1) 00:09:44.013 10334.523 - 10384.935: 98.6488% ( 3) 00:09:44.013 10384.935 - 10435.348: 98.6586% ( 2) 00:09:44.013 10435.348 - 10485.760: 98.6733% ( 3) 00:09:44.013 10485.760 - 10536.172: 98.6832% ( 2) 00:09:44.013 10536.172 - 10586.585: 98.6930% ( 2) 00:09:44.013 10586.585 - 10636.997: 98.7028% ( 2) 00:09:44.013 10636.997 - 10687.409: 98.7176% ( 3) 00:09:44.013 10687.409 - 10737.822: 98.7274% ( 2) 00:09:44.013 10737.822 - 10788.234: 98.7421% ( 3) 00:09:44.013 12603.077 - 12653.489: 98.7520% ( 2) 00:09:44.013 12653.489 - 12703.902: 98.7716% ( 4) 00:09:44.013 12703.902 - 12754.314: 98.7765% ( 1) 00:09:44.013 12754.314 - 12804.726: 98.7864% ( 2) 00:09:44.013 12804.726 - 12855.138: 98.7962% ( 2) 00:09:44.013 12855.138 - 12905.551: 98.8060% ( 2) 00:09:44.013 12905.551 - 13006.375: 98.8208% ( 3) 00:09:44.013 13006.375 - 13107.200: 98.8404% ( 4) 00:09:44.013 13107.200 - 13208.025: 98.8502% ( 2) 00:09:44.013 13208.025 - 13308.849: 98.8650% ( 3) 00:09:44.013 13308.849 - 13409.674: 98.8846% ( 4) 00:09:44.013 13409.674 - 13510.498: 98.9043% ( 4) 00:09:44.013 13510.498 - 13611.323: 98.9239% ( 4) 00:09:44.013 13611.323 - 13712.148: 98.9632% ( 8) 00:09:44.013 13712.148 - 13812.972: 99.0026% ( 8) 00:09:44.013 13812.972 - 13913.797: 99.0419% ( 8) 00:09:44.013 13913.797 - 14014.622: 99.0763% ( 7) 00:09:44.013 14014.622 - 14115.446: 99.1107% ( 7) 00:09:44.013 14115.446 - 14216.271: 99.1500% ( 8) 00:09:44.013 14216.271 - 14317.095: 99.1893% ( 8) 00:09:44.013 14317.095 - 14417.920: 99.2286% ( 8) 00:09:44.013 14417.920 - 14518.745: 99.2679% ( 8) 00:09:44.013 14518.745 - 14619.569: 99.3023% ( 7) 00:09:44.013 14619.569 - 14720.394: 99.3416% ( 8) 00:09:44.013 14720.394 - 14821.218: 99.3711% ( 6) 
00:09:44.013 21878.942 - 21979.766: 99.3809% ( 2) 00:09:44.013 21979.766 - 22080.591: 99.4006% ( 4) 00:09:44.013 22080.591 - 22181.415: 99.4251% ( 5) 00:09:44.013 22181.415 - 22282.240: 99.4448% ( 4) 00:09:44.013 22282.240 - 22383.065: 99.4644% ( 4) 00:09:44.013 22383.065 - 22483.889: 99.4841% ( 4) 00:09:44.013 22483.889 - 22584.714: 99.5037% ( 4) 00:09:44.013 22584.714 - 22685.538: 99.5283% ( 5) 00:09:44.013 22685.538 - 22786.363: 99.5480% ( 4) 00:09:44.013 22786.363 - 22887.188: 99.5676% ( 4) 00:09:44.013 22887.188 - 22988.012: 99.5873% ( 4) 00:09:44.013 22988.012 - 23088.837: 99.6020% ( 3) 00:09:44.013 23088.837 - 23189.662: 99.6266% ( 5) 00:09:44.013 23189.662 - 23290.486: 99.6462% ( 4) 00:09:44.013 23290.486 - 23391.311: 99.6659% ( 4) 00:09:44.013 23391.311 - 23492.135: 99.6904% ( 5) 00:09:44.013 23492.135 - 23592.960: 99.7101% ( 4) 00:09:44.013 23592.960 - 23693.785: 99.7298% ( 4) 00:09:44.013 23693.785 - 23794.609: 99.7494% ( 4) 00:09:44.013 23794.609 - 23895.434: 99.7740% ( 5) 00:09:44.013 23895.434 - 23996.258: 99.7936% ( 4) 00:09:44.013 23996.258 - 24097.083: 99.8133% ( 4) 00:09:44.013 24097.083 - 24197.908: 99.8329% ( 4) 00:09:44.013 24197.908 - 24298.732: 99.8526% ( 4) 00:09:44.013 24298.732 - 24399.557: 99.8722% ( 4) 00:09:44.013 24399.557 - 24500.382: 99.8968% ( 5) 00:09:44.013 24500.382 - 24601.206: 99.9165% ( 4) 00:09:44.013 24601.206 - 24702.031: 99.9361% ( 4) 00:09:44.013 24702.031 - 24802.855: 99.9558% ( 4) 00:09:44.013 24802.855 - 24903.680: 99.9803% ( 5) 00:09:44.013 24903.680 - 25004.505: 100.0000% ( 4) 00:09:44.013 00:09:44.013 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:44.013 ============================================================================== 00:09:44.013 Range in us Cumulative IO count 00:09:44.013 4965.612 - 4990.818: 0.0049% ( 1) 00:09:44.013 4990.818 - 5016.025: 0.0639% ( 12) 00:09:44.013 5016.025 - 5041.231: 0.1425% ( 16) 00:09:44.013 5041.231 - 5066.437: 0.2752% ( 27) 00:09:44.013 5066.437 - 5091.643: 0.5159% ( 49) 00:09:44.014 5091.643 - 5116.849: 0.7370% ( 45) 00:09:44.014 5116.849 - 5142.055: 0.9925% ( 52) 00:09:44.014 5142.055 - 5167.262: 1.3168% ( 66) 00:09:44.014 5167.262 - 5192.468: 1.7738% ( 93) 00:09:44.014 5192.468 - 5217.674: 2.3880% ( 125) 00:09:44.014 5217.674 - 5242.880: 3.1496% ( 155) 00:09:44.014 5242.880 - 5268.086: 4.1519% ( 204) 00:09:44.014 5268.086 - 5293.292: 5.3901% ( 252) 00:09:44.014 5293.292 - 5318.498: 6.6234% ( 251) 00:09:44.014 5318.498 - 5343.705: 7.8960% ( 259) 00:09:44.014 5343.705 - 5368.911: 9.0900% ( 243) 00:09:44.014 5368.911 - 5394.117: 10.3675% ( 260) 00:09:44.014 5394.117 - 5419.323: 11.7384% ( 279) 00:09:44.014 5419.323 - 5444.529: 13.1289% ( 283) 00:09:44.014 5444.529 - 5469.735: 14.4998% ( 279) 00:09:44.014 5469.735 - 5494.942: 15.8461% ( 274) 00:09:44.014 5494.942 - 5520.148: 17.2710% ( 290) 00:09:44.014 5520.148 - 5545.354: 18.7549% ( 302) 00:09:44.014 5545.354 - 5570.560: 20.2142% ( 297) 00:09:44.014 5570.560 - 5595.766: 21.6293% ( 288) 00:09:44.014 5595.766 - 5620.972: 23.0641% ( 292) 00:09:44.014 5620.972 - 5646.178: 24.4693% ( 286) 00:09:44.014 5646.178 - 5671.385: 25.8451% ( 280) 00:09:44.014 5671.385 - 5696.591: 27.2651% ( 289) 00:09:44.014 5696.591 - 5721.797: 28.6557% ( 283) 00:09:44.014 5721.797 - 5747.003: 30.0609% ( 286) 00:09:44.014 5747.003 - 5772.209: 31.4465% ( 282) 00:09:44.014 5772.209 - 5797.415: 32.8223% ( 280) 00:09:44.014 5797.415 - 5822.622: 34.2178% ( 284) 00:09:44.014 5822.622 - 5847.828: 35.5985% ( 281) 00:09:44.014 5847.828 - 5873.034: 36.9693% ( 279) 
00:09:44.014 5873.034 - 5898.240: 38.3746% ( 286) 00:09:44.014 5898.240 - 5923.446: 39.7750% ( 285) 00:09:44.014 5923.446 - 5948.652: 41.1704% ( 284) 00:09:44.014 5948.652 - 5973.858: 42.5904% ( 289) 00:09:44.014 5973.858 - 5999.065: 44.0104% ( 289) 00:09:44.014 5999.065 - 6024.271: 45.4059% ( 284) 00:09:44.014 6024.271 - 6049.477: 46.7718% ( 278) 00:09:44.014 6049.477 - 6074.683: 48.1673% ( 284) 00:09:44.014 6074.683 - 6099.889: 49.5430% ( 280) 00:09:44.014 6099.889 - 6125.095: 50.9483% ( 286) 00:09:44.014 6125.095 - 6150.302: 52.3634% ( 288) 00:09:44.014 6150.302 - 6175.508: 53.7834% ( 289) 00:09:44.014 6175.508 - 6200.714: 55.1887% ( 286) 00:09:44.014 6200.714 - 6225.920: 56.5989% ( 287) 00:09:44.014 6225.920 - 6251.126: 58.0434% ( 294) 00:09:44.014 6251.126 - 6276.332: 59.4389% ( 284) 00:09:44.014 6276.332 - 6301.538: 60.8638% ( 290) 00:09:44.014 6301.538 - 6326.745: 62.2445% ( 281) 00:09:44.014 6326.745 - 6351.951: 63.6449% ( 285) 00:09:44.014 6351.951 - 6377.157: 65.0649% ( 289) 00:09:44.014 6377.157 - 6402.363: 66.4996% ( 292) 00:09:44.014 6402.363 - 6427.569: 67.9393% ( 293) 00:09:44.014 6427.569 - 6452.775: 69.3544% ( 288) 00:09:44.014 6452.775 - 6503.188: 72.2288% ( 585) 00:09:44.014 6503.188 - 6553.600: 75.1228% ( 589) 00:09:44.014 6553.600 - 6604.012: 77.9776% ( 581) 00:09:44.014 6604.012 - 6654.425: 80.8127% ( 577) 00:09:44.014 6654.425 - 6704.837: 83.5004% ( 547) 00:09:44.014 6704.837 - 6755.249: 85.9866% ( 506) 00:09:44.014 6755.249 - 6805.662: 88.3255% ( 476) 00:09:44.014 6805.662 - 6856.074: 90.4629% ( 435) 00:09:44.014 6856.074 - 6906.486: 92.0941% ( 332) 00:09:44.014 6906.486 - 6956.898: 93.1506% ( 215) 00:09:44.014 6956.898 - 7007.311: 93.6861% ( 109) 00:09:44.014 7007.311 - 7057.723: 94.0792% ( 80) 00:09:44.014 7057.723 - 7108.135: 94.4084% ( 67) 00:09:44.014 7108.135 - 7158.548: 94.6492% ( 49) 00:09:44.014 7158.548 - 7208.960: 94.8457% ( 40) 00:09:44.014 7208.960 - 7259.372: 95.0029% ( 32) 00:09:44.014 7259.372 - 7309.785: 95.1356% ( 27) 00:09:44.014 7309.785 - 7360.197: 95.2634% ( 26) 00:09:44.014 7360.197 - 7410.609: 95.3715% ( 22) 00:09:44.014 7410.609 - 7461.022: 95.4845% ( 23) 00:09:44.014 7461.022 - 7511.434: 95.5975% ( 23) 00:09:44.014 7511.434 - 7561.846: 95.7007% ( 21) 00:09:44.014 7561.846 - 7612.258: 95.8088% ( 22) 00:09:44.014 7612.258 - 7662.671: 95.9169% ( 22) 00:09:44.014 7662.671 - 7713.083: 96.0299% ( 23) 00:09:44.014 7713.083 - 7763.495: 96.1429% ( 23) 00:09:44.014 7763.495 - 7813.908: 96.2559% ( 23) 00:09:44.014 7813.908 - 7864.320: 96.4082% ( 31) 00:09:44.014 7864.320 - 7914.732: 96.5114% ( 21) 00:09:44.014 7914.732 - 7965.145: 96.6048% ( 19) 00:09:44.014 7965.145 - 8015.557: 96.6785% ( 15) 00:09:44.014 8015.557 - 8065.969: 96.7669% ( 18) 00:09:44.014 8065.969 - 8116.382: 96.8553% ( 18) 00:09:44.014 8116.382 - 8166.794: 96.9241% ( 14) 00:09:44.014 8166.794 - 8217.206: 97.0028% ( 16) 00:09:44.014 8217.206 - 8267.618: 97.0814% ( 16) 00:09:44.014 8267.618 - 8318.031: 97.1600% ( 16) 00:09:44.014 8318.031 - 8368.443: 97.2435% ( 17) 00:09:44.014 8368.443 - 8418.855: 97.3270% ( 17) 00:09:44.014 8418.855 - 8469.268: 97.4106% ( 17) 00:09:44.014 8469.268 - 8519.680: 97.4892% ( 16) 00:09:44.014 8519.680 - 8570.092: 97.5727% ( 17) 00:09:44.014 8570.092 - 8620.505: 97.6513% ( 16) 00:09:44.014 8620.505 - 8670.917: 97.7201% ( 14) 00:09:44.014 8670.917 - 8721.329: 97.7742% ( 11) 00:09:44.014 8721.329 - 8771.742: 97.8135% ( 8) 00:09:44.014 8771.742 - 8822.154: 97.8479% ( 7) 00:09:44.014 8822.154 - 8872.566: 97.8823% ( 7) 00:09:44.014 8872.566 - 8922.978: 97.9167% ( 
7) 00:09:44.014 8922.978 - 8973.391: 97.9412% ( 5) 00:09:44.014 8973.391 - 9023.803: 97.9609% ( 4) 00:09:44.014 9023.803 - 9074.215: 97.9805% ( 4) 00:09:44.014 9074.215 - 9124.628: 98.0002% ( 4) 00:09:44.014 9124.628 - 9175.040: 98.0297% ( 6) 00:09:44.014 9175.040 - 9225.452: 98.0542% ( 5) 00:09:44.014 9225.452 - 9275.865: 98.0837% ( 6) 00:09:44.014 9275.865 - 9326.277: 98.1132% ( 6) 00:09:44.014 9326.277 - 9376.689: 98.1378% ( 5) 00:09:44.014 9376.689 - 9427.102: 98.1673% ( 6) 00:09:44.014 9427.102 - 9477.514: 98.1918% ( 5) 00:09:44.014 9477.514 - 9527.926: 98.2017% ( 2) 00:09:44.014 9527.926 - 9578.338: 98.2115% ( 2) 00:09:44.014 9578.338 - 9628.751: 98.2213% ( 2) 00:09:44.014 9628.751 - 9679.163: 98.2360% ( 3) 00:09:44.014 9679.163 - 9729.575: 98.2459% ( 2) 00:09:44.014 9729.575 - 9779.988: 98.2557% ( 2) 00:09:44.014 9779.988 - 9830.400: 98.2704% ( 3) 00:09:44.014 9830.400 - 9880.812: 98.2803% ( 2) 00:09:44.014 9880.812 - 9931.225: 98.2950% ( 3) 00:09:44.014 9931.225 - 9981.637: 98.3048% ( 2) 00:09:44.014 9981.637 - 10032.049: 98.3147% ( 2) 00:09:44.014 10032.049 - 10082.462: 98.3343% ( 4) 00:09:44.014 10082.462 - 10132.874: 98.3540% ( 4) 00:09:44.014 10132.874 - 10183.286: 98.3736% ( 4) 00:09:44.014 10183.286 - 10233.698: 98.3933% ( 4) 00:09:44.014 10233.698 - 10284.111: 98.4129% ( 4) 00:09:44.014 10284.111 - 10334.523: 98.4326% ( 4) 00:09:44.014 10334.523 - 10384.935: 98.4522% ( 4) 00:09:44.014 10384.935 - 10435.348: 98.4670% ( 3) 00:09:44.014 10435.348 - 10485.760: 98.4866% ( 4) 00:09:44.014 10485.760 - 10536.172: 98.5063% ( 4) 00:09:44.014 10536.172 - 10586.585: 98.5210% ( 3) 00:09:44.014 10586.585 - 10636.997: 98.5407% ( 4) 00:09:44.014 10636.997 - 10687.409: 98.5603% ( 4) 00:09:44.014 10687.409 - 10737.822: 98.5800% ( 4) 00:09:44.014 10737.822 - 10788.234: 98.5996% ( 4) 00:09:44.014 10788.234 - 10838.646: 98.6193% ( 4) 00:09:44.014 10838.646 - 10889.058: 98.6390% ( 4) 00:09:44.014 10889.058 - 10939.471: 98.6586% ( 4) 00:09:44.014 10939.471 - 10989.883: 98.6783% ( 4) 00:09:44.014 10989.883 - 11040.295: 98.6979% ( 4) 00:09:44.014 11040.295 - 11090.708: 98.7176% ( 4) 00:09:44.014 11090.708 - 11141.120: 98.7323% ( 3) 00:09:44.014 11141.120 - 11191.532: 98.7421% ( 2) 00:09:44.014 11292.357 - 11342.769: 98.7569% ( 3) 00:09:44.014 11342.769 - 11393.182: 98.7667% ( 2) 00:09:44.014 11393.182 - 11443.594: 98.7716% ( 1) 00:09:44.014 11443.594 - 11494.006: 98.7814% ( 2) 00:09:44.014 11494.006 - 11544.418: 98.7913% ( 2) 00:09:44.014 11544.418 - 11594.831: 98.8060% ( 3) 00:09:44.014 11594.831 - 11645.243: 98.8158% ( 2) 00:09:44.014 11645.243 - 11695.655: 98.8257% ( 2) 00:09:44.014 11695.655 - 11746.068: 98.8355% ( 2) 00:09:44.014 11746.068 - 11796.480: 98.8453% ( 2) 00:09:44.014 11796.480 - 11846.892: 98.8551% ( 2) 00:09:44.014 11846.892 - 11897.305: 98.8650% ( 2) 00:09:44.014 11897.305 - 11947.717: 98.8748% ( 2) 00:09:44.014 11947.717 - 11998.129: 98.8846% ( 2) 00:09:44.014 11998.129 - 12048.542: 98.8945% ( 2) 00:09:44.014 12048.542 - 12098.954: 98.9092% ( 3) 00:09:44.014 12098.954 - 12149.366: 98.9190% ( 2) 00:09:44.014 12149.366 - 12199.778: 98.9289% ( 2) 00:09:44.014 12199.778 - 12250.191: 98.9387% ( 2) 00:09:44.014 12250.191 - 12300.603: 98.9485% ( 2) 00:09:44.014 12300.603 - 12351.015: 98.9583% ( 2) 00:09:44.014 12351.015 - 12401.428: 98.9682% ( 2) 00:09:44.014 12401.428 - 12451.840: 98.9780% ( 2) 00:09:44.014 12451.840 - 12502.252: 98.9878% ( 2) 00:09:44.014 12502.252 - 12552.665: 99.0026% ( 3) 00:09:44.014 12552.665 - 12603.077: 99.0124% ( 2) 00:09:44.014 12603.077 - 12653.489: 99.0222% 
( 2) 00:09:44.014 12653.489 - 12703.902: 99.0320% ( 2) 00:09:44.014 12703.902 - 12754.314: 99.0419% ( 2) 00:09:44.014 12754.314 - 12804.726: 99.0468% ( 1) 00:09:44.014 12804.726 - 12855.138: 99.0566% ( 2) 00:09:44.014 12855.138 - 12905.551: 99.0664% ( 2) 00:09:44.014 12905.551 - 13006.375: 99.0910% ( 5) 00:09:44.014 13006.375 - 13107.200: 99.1107% ( 4) 00:09:44.014 13107.200 - 13208.025: 99.1303% ( 4) 00:09:44.014 13208.025 - 13308.849: 99.1450% ( 3) 00:09:44.014 13308.849 - 13409.674: 99.1647% ( 4) 00:09:44.014 13409.674 - 13510.498: 99.1844% ( 4) 00:09:44.014 13510.498 - 13611.323: 99.1942% ( 2) 00:09:44.015 13611.323 - 13712.148: 99.2089% ( 3) 00:09:44.015 13712.148 - 13812.972: 99.2286% ( 4) 00:09:44.015 13812.972 - 13913.797: 99.2482% ( 4) 00:09:44.015 13913.797 - 14014.622: 99.2679% ( 4) 00:09:44.015 14014.622 - 14115.446: 99.2777% ( 2) 00:09:44.015 14115.446 - 14216.271: 99.2974% ( 4) 00:09:44.015 14216.271 - 14317.095: 99.3170% ( 4) 00:09:44.015 14317.095 - 14417.920: 99.3367% ( 4) 00:09:44.015 14417.920 - 14518.745: 99.3514% ( 3) 00:09:44.015 14518.745 - 14619.569: 99.3662% ( 3) 00:09:44.015 14619.569 - 14720.394: 99.3711% ( 1) 00:09:44.015 20568.222 - 20669.046: 99.3858% ( 3) 00:09:44.015 20669.046 - 20769.871: 99.4104% ( 5) 00:09:44.015 20769.871 - 20870.695: 99.4300% ( 4) 00:09:44.015 20870.695 - 20971.520: 99.4497% ( 4) 00:09:44.015 20971.520 - 21072.345: 99.4693% ( 4) 00:09:44.015 21072.345 - 21173.169: 99.4890% ( 4) 00:09:44.015 21173.169 - 21273.994: 99.5136% ( 5) 00:09:44.015 21273.994 - 21374.818: 99.5332% ( 4) 00:09:44.015 21374.818 - 21475.643: 99.5529% ( 4) 00:09:44.015 21475.643 - 21576.468: 99.5725% ( 4) 00:09:44.015 21576.468 - 21677.292: 99.5971% ( 5) 00:09:44.015 21677.292 - 21778.117: 99.6167% ( 4) 00:09:44.015 21778.117 - 21878.942: 99.6364% ( 4) 00:09:44.015 21878.942 - 21979.766: 99.6561% ( 4) 00:09:44.015 21979.766 - 22080.591: 99.6757% ( 4) 00:09:44.015 22080.591 - 22181.415: 99.6954% ( 4) 00:09:44.015 22181.415 - 22282.240: 99.7150% ( 4) 00:09:44.015 22282.240 - 22383.065: 99.7347% ( 4) 00:09:44.015 22383.065 - 22483.889: 99.7543% ( 4) 00:09:44.015 22483.889 - 22584.714: 99.7789% ( 5) 00:09:44.015 22584.714 - 22685.538: 99.7985% ( 4) 00:09:44.015 22685.538 - 22786.363: 99.8182% ( 4) 00:09:44.015 22786.363 - 22887.188: 99.8379% ( 4) 00:09:44.015 22887.188 - 22988.012: 99.8575% ( 4) 00:09:44.015 22988.012 - 23088.837: 99.8821% ( 5) 00:09:44.015 23088.837 - 23189.662: 99.9017% ( 4) 00:09:44.015 23189.662 - 23290.486: 99.9214% ( 4) 00:09:44.015 23290.486 - 23391.311: 99.9410% ( 4) 00:09:44.015 23391.311 - 23492.135: 99.9607% ( 4) 00:09:44.015 23492.135 - 23592.960: 99.9803% ( 4) 00:09:44.015 23592.960 - 23693.785: 100.0000% ( 4) 00:09:44.015 00:09:44.015 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:44.015 ============================================================================== 00:09:44.015 Range in us Cumulative IO count 00:09:44.015 4965.612 - 4990.818: 0.0049% ( 1) 00:09:44.015 4990.818 - 5016.025: 0.0635% ( 12) 00:09:44.015 5016.025 - 5041.231: 0.1416% ( 16) 00:09:44.015 5041.231 - 5066.437: 0.3027% ( 33) 00:09:44.015 5066.437 - 5091.643: 0.5078% ( 42) 00:09:44.015 5091.643 - 5116.849: 0.7178% ( 43) 00:09:44.015 5116.849 - 5142.055: 0.9912% ( 56) 00:09:44.015 5142.055 - 5167.262: 1.3330% ( 70) 00:09:44.015 5167.262 - 5192.468: 1.7773% ( 91) 00:09:44.015 5192.468 - 5217.674: 2.3975% ( 127) 00:09:44.015 5217.674 - 5242.880: 3.2520% ( 175) 00:09:44.015 5242.880 - 5268.086: 4.2725% ( 209) 00:09:44.015 5268.086 - 5293.292: 5.3955% ( 
230) 00:09:44.015 5293.292 - 5318.498: 6.6553% ( 258) 00:09:44.015 5318.498 - 5343.705: 7.9590% ( 267) 00:09:44.015 5343.705 - 5368.911: 9.2871% ( 272) 00:09:44.015 5368.911 - 5394.117: 10.6104% ( 271) 00:09:44.015 5394.117 - 5419.323: 11.9385% ( 272) 00:09:44.015 5419.323 - 5444.529: 13.2178% ( 262) 00:09:44.015 5444.529 - 5469.735: 14.5508% ( 273) 00:09:44.015 5469.735 - 5494.942: 15.9277% ( 282) 00:09:44.015 5494.942 - 5520.148: 17.3535% ( 292) 00:09:44.015 5520.148 - 5545.354: 18.7744% ( 291) 00:09:44.015 5545.354 - 5570.560: 20.1416% ( 280) 00:09:44.015 5570.560 - 5595.766: 21.4893% ( 276) 00:09:44.015 5595.766 - 5620.972: 22.8516% ( 279) 00:09:44.015 5620.972 - 5646.178: 24.2480% ( 286) 00:09:44.015 5646.178 - 5671.385: 25.6738% ( 292) 00:09:44.015 5671.385 - 5696.591: 27.1387% ( 300) 00:09:44.015 5696.591 - 5721.797: 28.5010% ( 279) 00:09:44.015 5721.797 - 5747.003: 29.9170% ( 290) 00:09:44.015 5747.003 - 5772.209: 31.3037% ( 284) 00:09:44.015 5772.209 - 5797.415: 32.7148% ( 289) 00:09:44.015 5797.415 - 5822.622: 34.1309% ( 290) 00:09:44.015 5822.622 - 5847.828: 35.5225% ( 285) 00:09:44.015 5847.828 - 5873.034: 36.8994% ( 282) 00:09:44.015 5873.034 - 5898.240: 38.2812% ( 283) 00:09:44.015 5898.240 - 5923.446: 39.6777% ( 286) 00:09:44.015 5923.446 - 5948.652: 41.0645% ( 284) 00:09:44.015 5948.652 - 5973.858: 42.4707% ( 288) 00:09:44.015 5973.858 - 5999.065: 43.9014% ( 293) 00:09:44.015 5999.065 - 6024.271: 45.2979% ( 286) 00:09:44.015 6024.271 - 6049.477: 46.6650% ( 280) 00:09:44.015 6049.477 - 6074.683: 48.0859% ( 291) 00:09:44.015 6074.683 - 6099.889: 49.4922% ( 288) 00:09:44.015 6099.889 - 6125.095: 50.9033% ( 289) 00:09:44.015 6125.095 - 6150.302: 52.2998% ( 286) 00:09:44.015 6150.302 - 6175.508: 53.6914% ( 285) 00:09:44.015 6175.508 - 6200.714: 55.1172% ( 292) 00:09:44.015 6200.714 - 6225.920: 56.5381% ( 291) 00:09:44.015 6225.920 - 6251.126: 57.9004% ( 279) 00:09:44.015 6251.126 - 6276.332: 59.3164% ( 290) 00:09:44.015 6276.332 - 6301.538: 60.7227% ( 288) 00:09:44.015 6301.538 - 6326.745: 62.1191% ( 286) 00:09:44.015 6326.745 - 6351.951: 63.5254% ( 288) 00:09:44.015 6351.951 - 6377.157: 64.9463% ( 291) 00:09:44.015 6377.157 - 6402.363: 66.3672% ( 291) 00:09:44.015 6402.363 - 6427.569: 67.8076% ( 295) 00:09:44.015 6427.569 - 6452.775: 69.2432% ( 294) 00:09:44.015 6452.775 - 6503.188: 72.1045% ( 586) 00:09:44.015 6503.188 - 6553.600: 74.9512% ( 583) 00:09:44.015 6553.600 - 6604.012: 77.7832% ( 580) 00:09:44.015 6604.012 - 6654.425: 80.6055% ( 578) 00:09:44.015 6654.425 - 6704.837: 83.3105% ( 554) 00:09:44.015 6704.837 - 6755.249: 85.7861% ( 507) 00:09:44.015 6755.249 - 6805.662: 88.1250% ( 479) 00:09:44.015 6805.662 - 6856.074: 90.2832% ( 442) 00:09:44.015 6856.074 - 6906.486: 92.0117% ( 354) 00:09:44.015 6906.486 - 6956.898: 93.0664% ( 216) 00:09:44.015 6956.898 - 7007.311: 93.6279% ( 115) 00:09:44.015 7007.311 - 7057.723: 94.0039% ( 77) 00:09:44.015 7057.723 - 7108.135: 94.2920% ( 59) 00:09:44.015 7108.135 - 7158.548: 94.5215% ( 47) 00:09:44.015 7158.548 - 7208.960: 94.6826% ( 33) 00:09:44.015 7208.960 - 7259.372: 94.8389% ( 32) 00:09:44.015 7259.372 - 7309.785: 94.9805% ( 29) 00:09:44.015 7309.785 - 7360.197: 95.1465% ( 34) 00:09:44.015 7360.197 - 7410.609: 95.2637% ( 24) 00:09:44.015 7410.609 - 7461.022: 95.3711% ( 22) 00:09:44.015 7461.022 - 7511.434: 95.4736% ( 21) 00:09:44.015 7511.434 - 7561.846: 95.5762% ( 21) 00:09:44.015 7561.846 - 7612.258: 95.6445% ( 14) 00:09:44.015 7612.258 - 7662.671: 95.7227% ( 16) 00:09:44.015 7662.671 - 7713.083: 95.7861% ( 13) 00:09:44.015 
7713.083 - 7763.495: 95.8496% ( 13) 00:09:44.015 7763.495 - 7813.908: 95.9131% ( 13) 00:09:44.015 7813.908 - 7864.320: 95.9766% ( 13) 00:09:44.015 7864.320 - 7914.732: 96.0352% ( 12) 00:09:44.015 7914.732 - 7965.145: 96.0986% ( 13) 00:09:44.015 7965.145 - 8015.557: 96.1572% ( 12) 00:09:44.015 8015.557 - 8065.969: 96.2207% ( 13) 00:09:44.015 8065.969 - 8116.382: 96.2842% ( 13) 00:09:44.015 8116.382 - 8166.794: 96.3330% ( 10) 00:09:44.015 8166.794 - 8217.206: 96.4014% ( 14) 00:09:44.015 8217.206 - 8267.618: 96.4893% ( 18) 00:09:44.015 8267.618 - 8318.031: 96.5625% ( 15) 00:09:44.015 8318.031 - 8368.443: 96.6455% ( 17) 00:09:44.015 8368.443 - 8418.855: 96.7236% ( 16) 00:09:44.015 8418.855 - 8469.268: 96.8115% ( 18) 00:09:44.015 8469.268 - 8519.680: 96.8750% ( 13) 00:09:44.015 8519.680 - 8570.092: 96.9287% ( 11) 00:09:44.015 8570.092 - 8620.505: 96.9922% ( 13) 00:09:44.015 8620.505 - 8670.917: 97.0557% ( 13) 00:09:44.015 8670.917 - 8721.329: 97.1143% ( 12) 00:09:44.015 8721.329 - 8771.742: 97.1777% ( 13) 00:09:44.015 8771.742 - 8822.154: 97.2412% ( 13) 00:09:44.015 8822.154 - 8872.566: 97.2998% ( 12) 00:09:44.015 8872.566 - 8922.978: 97.3633% ( 13) 00:09:44.015 8922.978 - 8973.391: 97.4219% ( 12) 00:09:44.015 8973.391 - 9023.803: 97.4609% ( 8) 00:09:44.015 9023.803 - 9074.215: 97.5000% ( 8) 00:09:44.015 9074.215 - 9124.628: 97.5439% ( 9) 00:09:44.015 9124.628 - 9175.040: 97.5830% ( 8) 00:09:44.015 9175.040 - 9225.452: 97.6270% ( 9) 00:09:44.015 9225.452 - 9275.865: 97.6709% ( 9) 00:09:44.015 9275.865 - 9326.277: 97.7148% ( 9) 00:09:44.015 9326.277 - 9376.689: 97.7588% ( 9) 00:09:44.015 9376.689 - 9427.102: 97.7979% ( 8) 00:09:44.015 9427.102 - 9477.514: 97.8418% ( 9) 00:09:44.015 9477.514 - 9527.926: 97.8906% ( 10) 00:09:44.015 9527.926 - 9578.338: 97.9443% ( 11) 00:09:44.015 9578.338 - 9628.751: 97.9834% ( 8) 00:09:44.015 9628.751 - 9679.163: 98.0322% ( 10) 00:09:44.015 9679.163 - 9729.575: 98.0762% ( 9) 00:09:44.015 9729.575 - 9779.988: 98.1201% ( 9) 00:09:44.015 9779.988 - 9830.400: 98.1592% ( 8) 00:09:44.015 9830.400 - 9880.812: 98.2080% ( 10) 00:09:44.015 9880.812 - 9931.225: 98.2520% ( 9) 00:09:44.015 9931.225 - 9981.637: 98.2910% ( 8) 00:09:44.015 9981.637 - 10032.049: 98.3154% ( 5) 00:09:44.015 10032.049 - 10082.462: 98.3496% ( 7) 00:09:44.015 10082.462 - 10132.874: 98.3789% ( 6) 00:09:44.015 10132.874 - 10183.286: 98.3984% ( 4) 00:09:44.015 10183.286 - 10233.698: 98.4277% ( 6) 00:09:44.015 10233.698 - 10284.111: 98.4570% ( 6) 00:09:44.015 10284.111 - 10334.523: 98.4814% ( 5) 00:09:44.015 10334.523 - 10384.935: 98.5107% ( 6) 00:09:44.015 10384.935 - 10435.348: 98.5400% ( 6) 00:09:44.016 10435.348 - 10485.760: 98.5742% ( 7) 00:09:44.016 10485.760 - 10536.172: 98.5986% ( 5) 00:09:44.016 10536.172 - 10586.585: 98.6230% ( 5) 00:09:44.016 10586.585 - 10636.997: 98.6523% ( 6) 00:09:44.016 10636.997 - 10687.409: 98.6816% ( 6) 00:09:44.016 10687.409 - 10737.822: 98.7109% ( 6) 00:09:44.016 10737.822 - 10788.234: 98.7402% ( 6) 00:09:44.016 10788.234 - 10838.646: 98.7695% ( 6) 00:09:44.016 10838.646 - 10889.058: 98.7988% ( 6) 00:09:44.016 10889.058 - 10939.471: 98.8281% ( 6) 00:09:44.016 10939.471 - 10989.883: 98.8574% ( 6) 00:09:44.016 10989.883 - 11040.295: 98.8867% ( 6) 00:09:44.016 11040.295 - 11090.708: 98.9111% ( 5) 00:09:44.016 11090.708 - 11141.120: 98.9404% ( 6) 00:09:44.016 11141.120 - 11191.532: 98.9600% ( 4) 00:09:44.016 11191.532 - 11241.945: 98.9697% ( 2) 00:09:44.016 11241.945 - 11292.357: 98.9844% ( 3) 00:09:44.016 11292.357 - 11342.769: 98.9941% ( 2) 00:09:44.016 11342.769 - 
11393.182: 99.0039% ( 2) 00:09:44.016 11393.182 - 11443.594: 99.0137% ( 2) 00:09:44.016 11443.594 - 11494.006: 99.0234% ( 2) 00:09:44.016 11494.006 - 11544.418: 99.0332% ( 2) 00:09:44.016 11544.418 - 11594.831: 99.0430% ( 2) 00:09:44.016 11594.831 - 11645.243: 99.0527% ( 2) 00:09:44.016 11645.243 - 11695.655: 99.0625% ( 2) 00:09:44.016 11695.655 - 11746.068: 99.0723% ( 2) 00:09:44.016 11746.068 - 11796.480: 99.0820% ( 2) 00:09:44.016 11796.480 - 11846.892: 99.0918% ( 2) 00:09:44.016 11846.892 - 11897.305: 99.1016% ( 2) 00:09:44.016 11897.305 - 11947.717: 99.1113% ( 2) 00:09:44.016 11947.717 - 11998.129: 99.1211% ( 2) 00:09:44.016 11998.129 - 12048.542: 99.1309% ( 2) 00:09:44.016 12048.542 - 12098.954: 99.1455% ( 3) 00:09:44.016 12098.954 - 12149.366: 99.1504% ( 1) 00:09:44.016 12149.366 - 12199.778: 99.1602% ( 2) 00:09:44.016 12199.778 - 12250.191: 99.1699% ( 2) 00:09:44.016 12250.191 - 12300.603: 99.1797% ( 2) 00:09:44.016 12300.603 - 12351.015: 99.1895% ( 2) 00:09:44.016 12351.015 - 12401.428: 99.2041% ( 3) 00:09:44.016 12401.428 - 12451.840: 99.2139% ( 2) 00:09:44.016 12451.840 - 12502.252: 99.2236% ( 2) 00:09:44.016 12502.252 - 12552.665: 99.2334% ( 2) 00:09:44.016 12552.665 - 12603.077: 99.2432% ( 2) 00:09:44.016 12603.077 - 12653.489: 99.2529% ( 2) 00:09:44.016 12653.489 - 12703.902: 99.2627% ( 2) 00:09:44.016 12703.902 - 12754.314: 99.2725% ( 2) 00:09:44.016 12754.314 - 12804.726: 99.2920% ( 4) 00:09:44.016 12804.726 - 12855.138: 99.3115% ( 4) 00:09:44.016 12855.138 - 12905.551: 99.3311% ( 4) 00:09:44.016 12905.551 - 13006.375: 99.3750% ( 9) 00:09:44.016 13006.375 - 13107.200: 99.4092% ( 7) 00:09:44.016 13107.200 - 13208.025: 99.4482% ( 8) 00:09:44.016 13208.025 - 13308.849: 99.4824% ( 7) 00:09:44.016 13308.849 - 13409.674: 99.5068% ( 5) 00:09:44.016 13409.674 - 13510.498: 99.5264% ( 4) 00:09:44.016 13510.498 - 13611.323: 99.5459% ( 4) 00:09:44.016 13611.323 - 13712.148: 99.5654% ( 4) 00:09:44.016 13712.148 - 13812.972: 99.5898% ( 5) 00:09:44.016 13812.972 - 13913.797: 99.6094% ( 4) 00:09:44.016 13913.797 - 14014.622: 99.6289% ( 4) 00:09:44.016 14014.622 - 14115.446: 99.6533% ( 5) 00:09:44.016 14115.446 - 14216.271: 99.6729% ( 4) 00:09:44.016 14216.271 - 14317.095: 99.6924% ( 4) 00:09:44.016 14317.095 - 14417.920: 99.7119% ( 4) 00:09:44.016 14417.920 - 14518.745: 99.7314% ( 4) 00:09:44.016 14518.745 - 14619.569: 99.7510% ( 4) 00:09:44.016 14619.569 - 14720.394: 99.7754% ( 5) 00:09:44.016 14720.394 - 14821.218: 99.7949% ( 4) 00:09:44.016 14821.218 - 14922.043: 99.8145% ( 4) 00:09:44.016 14922.043 - 15022.868: 99.8340% ( 4) 00:09:44.016 15022.868 - 15123.692: 99.8584% ( 5) 00:09:44.016 15123.692 - 15224.517: 99.8779% ( 4) 00:09:44.016 15224.517 - 15325.342: 99.8975% ( 4) 00:09:44.016 15325.342 - 15426.166: 99.9170% ( 4) 00:09:44.016 15426.166 - 15526.991: 99.9414% ( 5) 00:09:44.016 15526.991 - 15627.815: 99.9609% ( 4) 00:09:44.016 15627.815 - 15728.640: 99.9805% ( 4) 00:09:44.016 15728.640 - 15829.465: 100.0000% ( 4) 00:09:44.016 00:09:44.016 15:51:55 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:09:45.387 Initializing NVMe Controllers 00:09:45.387 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:45.387 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:45.387 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:45.387 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:45.387 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:45.387 Associating PCIE 
(0000:00:06.0) NSID 1 with lcore 0 00:09:45.387 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:45.387 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:45.387 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:45.387 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:45.387 Initialization complete. Launching workers. 00:09:45.387 ======================================================== 00:09:45.387 Latency(us) 00:09:45.387 Device Information : IOPS MiB/s Average min max 00:09:45.387 PCIE (0000:00:09.0) NSID 1 from core 0: 19887.85 233.06 6433.95 4959.46 27239.21 00:09:45.387 PCIE (0000:00:06.0) NSID 1 from core 0: 19887.85 233.06 6428.19 4901.28 26590.54 00:09:45.387 PCIE (0000:00:07.0) NSID 1 from core 0: 19887.85 233.06 6422.18 4905.24 25223.89 00:09:45.387 PCIE (0000:00:08.0) NSID 1 from core 0: 19887.85 233.06 6416.63 4900.51 24085.60 00:09:45.387 PCIE (0000:00:08.0) NSID 2 from core 0: 19887.85 233.06 6410.96 5086.29 23085.13 00:09:45.387 PCIE (0000:00:08.0) NSID 3 from core 0: 20015.34 234.55 6364.49 4955.27 14990.85 00:09:45.387 ======================================================== 00:09:45.387 Total : 119454.60 1399.86 6412.68 4900.51 27239.21 00:09:45.387 00:09:45.387 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:45.387 ================================================================================= 00:09:45.387 1.00000% : 5444.529us 00:09:45.387 10.00000% : 5822.622us 00:09:45.387 25.00000% : 6049.477us 00:09:45.387 50.00000% : 6276.332us 00:09:45.387 75.00000% : 6604.012us 00:09:45.387 90.00000% : 6906.486us 00:09:45.387 95.00000% : 7108.135us 00:09:45.387 98.00000% : 7360.197us 00:09:45.387 99.00000% : 7965.145us 00:09:45.387 99.50000% : 25206.154us 00:09:45.387 99.90000% : 26819.348us 00:09:45.387 99.99000% : 27222.646us 00:09:45.387 99.99900% : 27424.295us 00:09:45.387 99.99990% : 27424.295us 00:09:45.387 99.99999% : 27424.295us 00:09:45.387 00:09:45.387 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:45.387 ================================================================================= 00:09:45.387 1.00000% : 5242.880us 00:09:45.387 10.00000% : 5646.178us 00:09:45.387 25.00000% : 5847.828us 00:09:45.387 50.00000% : 6276.332us 00:09:45.387 75.00000% : 6755.249us 00:09:45.387 90.00000% : 7208.960us 00:09:45.387 95.00000% : 7360.197us 00:09:45.387 98.00000% : 7612.258us 00:09:45.387 99.00000% : 7965.145us 00:09:45.387 99.50000% : 23794.609us 00:09:45.387 99.90000% : 26214.400us 00:09:45.387 99.99000% : 26617.698us 00:09:45.387 99.99900% : 26617.698us 00:09:45.387 99.99990% : 26617.698us 00:09:45.387 99.99999% : 26617.698us 00:09:45.387 00:09:45.387 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:45.387 ================================================================================= 00:09:45.387 1.00000% : 5494.942us 00:09:45.387 10.00000% : 5847.828us 00:09:45.387 25.00000% : 6049.477us 00:09:45.387 50.00000% : 6276.332us 00:09:45.387 75.00000% : 6604.012us 00:09:45.387 90.00000% : 6906.486us 00:09:45.387 95.00000% : 7108.135us 00:09:45.387 98.00000% : 7461.022us 00:09:45.387 99.00000% : 7763.495us 00:09:45.387 99.50000% : 22685.538us 00:09:45.387 99.90000% : 24802.855us 00:09:45.387 99.99000% : 25206.154us 00:09:45.387 99.99900% : 25306.978us 00:09:45.387 99.99990% : 25306.978us 00:09:45.387 99.99999% : 25306.978us 00:09:45.387 00:09:45.387 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:45.387 
================================================================================= 00:09:45.387 1.00000% : 5469.735us 00:09:45.387 10.00000% : 5822.622us 00:09:45.387 25.00000% : 6024.271us 00:09:45.387 50.00000% : 6276.332us 00:09:45.387 75.00000% : 6604.012us 00:09:45.387 90.00000% : 6956.898us 00:09:45.387 95.00000% : 7108.135us 00:09:45.387 98.00000% : 7511.434us 00:09:45.388 99.00000% : 8015.557us 00:09:45.388 99.50000% : 21576.468us 00:09:45.388 99.90000% : 23693.785us 00:09:45.388 99.99000% : 24097.083us 00:09:45.388 99.99900% : 24097.083us 00:09:45.388 99.99990% : 24097.083us 00:09:45.388 99.99999% : 24097.083us 00:09:45.388 00:09:45.388 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:45.388 ================================================================================= 00:09:45.388 1.00000% : 5469.735us 00:09:45.388 10.00000% : 5822.622us 00:09:45.388 25.00000% : 6049.477us 00:09:45.388 50.00000% : 6276.332us 00:09:45.388 75.00000% : 6604.012us 00:09:45.388 90.00000% : 6906.486us 00:09:45.388 95.00000% : 7158.548us 00:09:45.388 98.00000% : 7561.846us 00:09:45.388 99.00000% : 8065.969us 00:09:45.388 99.50000% : 20669.046us 00:09:45.388 99.90000% : 22685.538us 00:09:45.388 99.99000% : 23088.837us 00:09:45.388 99.99900% : 23088.837us 00:09:45.388 99.99990% : 23088.837us 00:09:45.388 99.99999% : 23088.837us 00:09:45.388 00:09:45.388 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:45.388 ================================================================================= 00:09:45.388 1.00000% : 5494.942us 00:09:45.388 10.00000% : 5822.622us 00:09:45.388 25.00000% : 6049.477us 00:09:45.388 50.00000% : 6276.332us 00:09:45.388 75.00000% : 6604.012us 00:09:45.388 90.00000% : 6906.486us 00:09:45.388 95.00000% : 7108.135us 00:09:45.388 98.00000% : 7713.083us 00:09:45.388 99.00000% : 8318.031us 00:09:45.388 99.50000% : 12603.077us 00:09:45.388 99.90000% : 14518.745us 00:09:45.388 99.99000% : 15022.868us 00:09:45.388 99.99900% : 15022.868us 00:09:45.388 99.99990% : 15022.868us 00:09:45.388 99.99999% : 15022.868us 00:09:45.388 00:09:45.388 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:45.388 ============================================================================== 00:09:45.388 Range in us Cumulative IO count 00:09:45.388 4940.406 - 4965.612: 0.0050% ( 1) 00:09:45.388 5041.231 - 5066.437: 0.0100% ( 1) 00:09:45.388 5167.262 - 5192.468: 0.0250% ( 3) 00:09:45.388 5192.468 - 5217.674: 0.0551% ( 6) 00:09:45.388 5217.674 - 5242.880: 0.1052% ( 10) 00:09:45.388 5242.880 - 5268.086: 0.1703% ( 13) 00:09:45.388 5268.086 - 5293.292: 0.2254% ( 11) 00:09:45.388 5293.292 - 5318.498: 0.3155% ( 18) 00:09:45.388 5318.498 - 5343.705: 0.4107% ( 19) 00:09:45.388 5343.705 - 5368.911: 0.5058% ( 19) 00:09:45.388 5368.911 - 5394.117: 0.6410% ( 27) 00:09:45.388 5394.117 - 5419.323: 0.8113% ( 34) 00:09:45.388 5419.323 - 5444.529: 1.0166% ( 41) 00:09:45.388 5444.529 - 5469.735: 1.2370% ( 44) 00:09:45.388 5469.735 - 5494.942: 1.5274% ( 58) 00:09:45.388 5494.942 - 5520.148: 1.7728% ( 49) 00:09:45.388 5520.148 - 5545.354: 2.0733% ( 60) 00:09:45.388 5545.354 - 5570.560: 2.3838% ( 62) 00:09:45.388 5570.560 - 5595.766: 2.7093% ( 65) 00:09:45.388 5595.766 - 5620.972: 3.0699% ( 72) 00:09:45.388 5620.972 - 5646.178: 3.6258% ( 111) 00:09:45.388 5646.178 - 5671.385: 4.2318% ( 121) 00:09:45.388 5671.385 - 5696.591: 4.9629% ( 146) 00:09:45.388 5696.591 - 5721.797: 5.8444% ( 176) 00:09:45.388 5721.797 - 5747.003: 6.8109% ( 193) 00:09:45.388 5747.003 - 5772.209: 
8.0879% ( 255) 00:09:45.388 5772.209 - 5797.415: 9.3299% ( 248) 00:09:45.388 5797.415 - 5822.622: 10.8574% ( 305) 00:09:45.388 5822.622 - 5847.828: 12.3297% ( 294) 00:09:45.388 5847.828 - 5873.034: 14.0976% ( 353) 00:09:45.388 5873.034 - 5898.240: 15.8303% ( 346) 00:09:45.388 5898.240 - 5923.446: 17.5280% ( 339) 00:09:45.388 5923.446 - 5948.652: 19.2057% ( 335) 00:09:45.388 5948.652 - 5973.858: 20.9635% ( 351) 00:09:45.388 5973.858 - 5999.065: 22.8115% ( 369) 00:09:45.388 5999.065 - 6024.271: 24.7746% ( 392) 00:09:45.388 6024.271 - 6049.477: 26.7528% ( 395) 00:09:45.388 6049.477 - 6074.683: 28.8962% ( 428) 00:09:45.388 6074.683 - 6099.889: 30.9095% ( 402) 00:09:45.388 6099.889 - 6125.095: 33.6138% ( 540) 00:09:45.388 6125.095 - 6150.302: 36.0927% ( 495) 00:09:45.388 6150.302 - 6175.508: 39.0675% ( 594) 00:09:45.388 6175.508 - 6200.714: 41.7969% ( 545) 00:09:45.388 6200.714 - 6225.920: 44.4161% ( 523) 00:09:45.388 6225.920 - 6251.126: 47.2957% ( 575) 00:09:45.388 6251.126 - 6276.332: 50.8363% ( 707) 00:09:45.388 6276.332 - 6301.538: 54.0114% ( 634) 00:09:45.388 6301.538 - 6326.745: 57.0513% ( 607) 00:09:45.388 6326.745 - 6351.951: 59.6805% ( 525) 00:09:45.388 6351.951 - 6377.157: 61.7538% ( 414) 00:09:45.388 6377.157 - 6402.363: 64.0475% ( 458) 00:09:45.388 6402.363 - 6427.569: 66.0407% ( 398) 00:09:45.388 6427.569 - 6452.775: 67.7484% ( 341) 00:09:45.388 6452.775 - 6503.188: 70.9085% ( 631) 00:09:45.388 6503.188 - 6553.600: 74.1837% ( 654) 00:09:45.388 6553.600 - 6604.012: 76.9281% ( 548) 00:09:45.388 6604.012 - 6654.425: 79.8277% ( 579) 00:09:45.388 6654.425 - 6704.837: 82.2867% ( 491) 00:09:45.388 6704.837 - 6755.249: 84.5403% ( 450) 00:09:45.388 6755.249 - 6805.662: 86.6236% ( 416) 00:09:45.388 6805.662 - 6856.074: 88.5767% ( 390) 00:09:45.388 6856.074 - 6906.486: 90.5248% ( 389) 00:09:45.388 6906.486 - 6956.898: 92.3427% ( 363) 00:09:45.388 6956.898 - 7007.311: 93.8101% ( 293) 00:09:45.388 7007.311 - 7057.723: 94.9669% ( 231) 00:09:45.388 7057.723 - 7108.135: 95.8884% ( 184) 00:09:45.388 7108.135 - 7158.548: 96.6046% ( 143) 00:09:45.388 7158.548 - 7208.960: 97.1054% ( 100) 00:09:45.388 7208.960 - 7259.372: 97.5811% ( 95) 00:09:45.388 7259.372 - 7309.785: 97.8616% ( 56) 00:09:45.388 7309.785 - 7360.197: 98.1020% ( 48) 00:09:45.388 7360.197 - 7410.609: 98.2572% ( 31) 00:09:45.388 7410.609 - 7461.022: 98.3824% ( 25) 00:09:45.388 7461.022 - 7511.434: 98.4926% ( 22) 00:09:45.388 7511.434 - 7561.846: 98.5927% ( 20) 00:09:45.388 7561.846 - 7612.258: 98.6929% ( 20) 00:09:45.388 7612.258 - 7662.671: 98.7780% ( 17) 00:09:45.388 7662.671 - 7713.083: 98.8331% ( 11) 00:09:45.388 7713.083 - 7763.495: 98.8882% ( 11) 00:09:45.388 7763.495 - 7813.908: 98.9183% ( 6) 00:09:45.388 7813.908 - 7864.320: 98.9483% ( 6) 00:09:45.388 7864.320 - 7914.732: 98.9734% ( 5) 00:09:45.388 7914.732 - 7965.145: 99.0034% ( 6) 00:09:45.388 7965.145 - 8015.557: 99.0385% ( 7) 00:09:45.388 8015.557 - 8065.969: 99.0635% ( 5) 00:09:45.388 8065.969 - 8116.382: 99.0935% ( 6) 00:09:45.388 8116.382 - 8166.794: 99.1186% ( 5) 00:09:45.388 8166.794 - 8217.206: 99.1486% ( 6) 00:09:45.388 8217.206 - 8267.618: 99.1687% ( 4) 00:09:45.388 8267.618 - 8318.031: 99.1837% ( 3) 00:09:45.388 8318.031 - 8368.443: 99.1987% ( 3) 00:09:45.388 8368.443 - 8418.855: 99.2188% ( 4) 00:09:45.388 8418.855 - 8469.268: 99.2338% ( 3) 00:09:45.388 8469.268 - 8519.680: 99.2538% ( 4) 00:09:45.388 8519.680 - 8570.092: 99.2688% ( 3) 00:09:45.388 8570.092 - 8620.505: 99.2889% ( 4) 00:09:45.388 8620.505 - 8670.917: 99.3089% ( 4) 00:09:45.388 8670.917 - 8721.329: 
99.3289% ( 4) 00:09:45.388 8721.329 - 8771.742: 99.3440% ( 3) 00:09:45.388 8771.742 - 8822.154: 99.3590% ( 3) 00:09:45.388 24903.680 - 25004.505: 99.3840% ( 5) 00:09:45.388 25004.505 - 25105.329: 99.4391% ( 11) 00:09:45.388 25105.329 - 25206.154: 99.5092% ( 14) 00:09:45.388 25206.154 - 25306.978: 99.6194% ( 22) 00:09:45.388 25306.978 - 25407.803: 99.6695% ( 10) 00:09:45.388 25407.803 - 25508.628: 99.6845% ( 3) 00:09:45.388 25508.628 - 25609.452: 99.7045% ( 4) 00:09:45.388 25609.452 - 25710.277: 99.7246% ( 4) 00:09:45.388 25710.277 - 25811.102: 99.7446% ( 4) 00:09:45.388 25811.102 - 26012.751: 99.7796% ( 7) 00:09:45.388 26012.751 - 26214.400: 99.8147% ( 7) 00:09:45.388 26214.400 - 26416.049: 99.8498% ( 7) 00:09:45.388 26416.049 - 26617.698: 99.8898% ( 8) 00:09:45.388 26617.698 - 26819.348: 99.9249% ( 7) 00:09:45.388 26819.348 - 27020.997: 99.9649% ( 8) 00:09:45.388 27020.997 - 27222.646: 99.9950% ( 6) 00:09:45.388 27222.646 - 27424.295: 100.0000% ( 1) 00:09:45.388 00:09:45.388 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:45.388 ============================================================================== 00:09:45.388 Range in us Cumulative IO count 00:09:45.388 4889.994 - 4915.200: 0.0050% ( 1) 00:09:45.388 4915.200 - 4940.406: 0.0851% ( 16) 00:09:45.388 4940.406 - 4965.612: 0.1002% ( 3) 00:09:45.388 4990.818 - 5016.025: 0.1152% ( 3) 00:09:45.388 5016.025 - 5041.231: 0.1302% ( 3) 00:09:45.388 5041.231 - 5066.437: 0.1502% ( 4) 00:09:45.388 5066.437 - 5091.643: 0.1853% ( 7) 00:09:45.388 5091.643 - 5116.849: 0.2053% ( 4) 00:09:45.388 5116.849 - 5142.055: 0.2704% ( 13) 00:09:45.388 5142.055 - 5167.262: 0.3205% ( 10) 00:09:45.388 5167.262 - 5192.468: 0.5058% ( 37) 00:09:45.388 5192.468 - 5217.674: 0.7011% ( 39) 00:09:45.388 5217.674 - 5242.880: 1.0517% ( 70) 00:09:45.388 5242.880 - 5268.086: 1.2019% ( 30) 00:09:45.388 5268.086 - 5293.292: 1.4323% ( 46) 00:09:45.388 5293.292 - 5318.498: 1.8079% ( 75) 00:09:45.388 5318.498 - 5343.705: 2.0833% ( 55) 00:09:45.388 5343.705 - 5368.911: 2.4790% ( 79) 00:09:45.388 5368.911 - 5394.117: 2.9197% ( 88) 00:09:45.388 5394.117 - 5419.323: 3.3203% ( 80) 00:09:45.388 5419.323 - 5444.529: 3.8111% ( 98) 00:09:45.388 5444.529 - 5469.735: 4.5573% ( 149) 00:09:45.388 5469.735 - 5494.942: 5.0982% ( 108) 00:09:45.388 5494.942 - 5520.148: 5.6691% ( 114) 00:09:45.388 5520.148 - 5545.354: 6.3151% ( 129) 00:09:45.388 5545.354 - 5570.560: 7.2766% ( 192) 00:09:45.388 5570.560 - 5595.766: 8.4285% ( 230) 00:09:45.388 5595.766 - 5620.972: 9.4952% ( 213) 00:09:45.388 5620.972 - 5646.178: 10.9625% ( 293) 00:09:45.388 5646.178 - 5671.385: 12.8255% ( 372) 00:09:45.388 5671.385 - 5696.591: 14.9539% ( 425) 00:09:45.389 5696.591 - 5721.797: 17.1374% ( 436) 00:09:45.389 5721.797 - 5747.003: 18.9103% ( 354) 00:09:45.389 5747.003 - 5772.209: 20.7181% ( 361) 00:09:45.389 5772.209 - 5797.415: 22.3858% ( 333) 00:09:45.389 5797.415 - 5822.622: 24.1937% ( 361) 00:09:45.389 5822.622 - 5847.828: 26.2320% ( 407) 00:09:45.389 5847.828 - 5873.034: 28.0699% ( 367) 00:09:45.389 5873.034 - 5898.240: 29.5423% ( 294) 00:09:45.389 5898.240 - 5923.446: 30.9996% ( 291) 00:09:45.389 5923.446 - 5948.652: 32.4720% ( 294) 00:09:45.389 5948.652 - 5973.858: 34.1096% ( 327) 00:09:45.389 5973.858 - 5999.065: 35.6220% ( 302) 00:09:45.389 5999.065 - 6024.271: 36.9341% ( 262) 00:09:45.389 6024.271 - 6049.477: 38.4315% ( 299) 00:09:45.389 6049.477 - 6074.683: 40.1693% ( 347) 00:09:45.389 6074.683 - 6099.889: 41.5465% ( 275) 00:09:45.389 6099.889 - 6125.095: 42.8886% ( 268) 00:09:45.389 
6125.095 - 6150.302: 44.3460% ( 291) 00:09:45.389 6150.302 - 6175.508: 45.9034% ( 311) 00:09:45.389 6175.508 - 6200.714: 47.2306% ( 265) 00:09:45.389 6200.714 - 6225.920: 48.4826% ( 250) 00:09:45.389 6225.920 - 6251.126: 49.7796% ( 259) 00:09:45.389 6251.126 - 6276.332: 51.1368% ( 271) 00:09:45.389 6276.332 - 6301.538: 52.4489% ( 262) 00:09:45.389 6301.538 - 6326.745: 53.6809% ( 246) 00:09:45.389 6326.745 - 6351.951: 54.9479% ( 253) 00:09:45.389 6351.951 - 6377.157: 55.9896% ( 208) 00:09:45.389 6377.157 - 6402.363: 57.2216% ( 246) 00:09:45.389 6402.363 - 6427.569: 58.3834% ( 232) 00:09:45.389 6427.569 - 6452.775: 59.6955% ( 262) 00:09:45.389 6452.775 - 6503.188: 62.3047% ( 521) 00:09:45.389 6503.188 - 6553.600: 65.3896% ( 616) 00:09:45.389 6553.600 - 6604.012: 68.2542% ( 572) 00:09:45.389 6604.012 - 6654.425: 71.1088% ( 570) 00:09:45.389 6654.425 - 6704.837: 73.9032% ( 558) 00:09:45.389 6704.837 - 6755.249: 76.5074% ( 520) 00:09:45.389 6755.249 - 6805.662: 78.4906% ( 396) 00:09:45.389 6805.662 - 6856.074: 80.2935% ( 360) 00:09:45.389 6856.074 - 6906.486: 81.9161% ( 324) 00:09:45.389 6906.486 - 6956.898: 83.6589% ( 348) 00:09:45.389 6956.898 - 7007.311: 85.3065% ( 329) 00:09:45.389 7007.311 - 7057.723: 86.7839% ( 295) 00:09:45.389 7057.723 - 7108.135: 88.3013% ( 303) 00:09:45.389 7108.135 - 7158.548: 89.8938% ( 318) 00:09:45.389 7158.548 - 7208.960: 91.5014% ( 321) 00:09:45.389 7208.960 - 7259.372: 93.1340% ( 326) 00:09:45.389 7259.372 - 7309.785: 94.5363% ( 280) 00:09:45.389 7309.785 - 7360.197: 95.5980% ( 212) 00:09:45.389 7360.197 - 7410.609: 96.4193% ( 164) 00:09:45.389 7410.609 - 7461.022: 97.0152% ( 119) 00:09:45.389 7461.022 - 7511.434: 97.4960% ( 96) 00:09:45.389 7511.434 - 7561.846: 97.8365% ( 68) 00:09:45.389 7561.846 - 7612.258: 98.0569% ( 44) 00:09:45.389 7612.258 - 7662.671: 98.2222% ( 33) 00:09:45.389 7662.671 - 7713.083: 98.3724% ( 30) 00:09:45.389 7713.083 - 7763.495: 98.5276% ( 31) 00:09:45.389 7763.495 - 7813.908: 98.6679% ( 28) 00:09:45.389 7813.908 - 7864.320: 98.8181% ( 30) 00:09:45.389 7864.320 - 7914.732: 98.9283% ( 22) 00:09:45.389 7914.732 - 7965.145: 99.0435% ( 23) 00:09:45.389 7965.145 - 8015.557: 99.1136% ( 14) 00:09:45.389 8015.557 - 8065.969: 99.1737% ( 12) 00:09:45.389 8065.969 - 8116.382: 99.2137% ( 8) 00:09:45.389 8116.382 - 8166.794: 99.2438% ( 6) 00:09:45.389 8166.794 - 8217.206: 99.2638% ( 4) 00:09:45.389 8217.206 - 8267.618: 99.2889% ( 5) 00:09:45.389 8267.618 - 8318.031: 99.3089% ( 4) 00:09:45.389 8318.031 - 8368.443: 99.3289% ( 4) 00:09:45.389 8368.443 - 8418.855: 99.3440% ( 3) 00:09:45.389 8418.855 - 8469.268: 99.3540% ( 2) 00:09:45.389 8469.268 - 8519.680: 99.3590% ( 1) 00:09:45.389 22887.188 - 22988.012: 99.3640% ( 1) 00:09:45.389 22988.012 - 23088.837: 99.3840% ( 4) 00:09:45.389 23088.837 - 23189.662: 99.3990% ( 3) 00:09:45.389 23189.662 - 23290.486: 99.4191% ( 4) 00:09:45.389 23290.486 - 23391.311: 99.4391% ( 4) 00:09:45.389 23391.311 - 23492.135: 99.4541% ( 3) 00:09:45.389 23492.135 - 23592.960: 99.4692% ( 3) 00:09:45.389 23592.960 - 23693.785: 99.4842% ( 3) 00:09:45.389 23693.785 - 23794.609: 99.5092% ( 5) 00:09:45.389 23794.609 - 23895.434: 99.5242% ( 3) 00:09:45.389 23895.434 - 23996.258: 99.5443% ( 4) 00:09:45.389 23996.258 - 24097.083: 99.5593% ( 3) 00:09:45.389 24097.083 - 24197.908: 99.5743% ( 3) 00:09:45.389 24197.908 - 24298.732: 99.5893% ( 3) 00:09:45.389 24298.732 - 24399.557: 99.6094% ( 4) 00:09:45.389 24399.557 - 24500.382: 99.6294% ( 4) 00:09:45.389 24500.382 - 24601.206: 99.6444% ( 3) 00:09:45.389 24601.206 - 24702.031: 99.6645% ( 
4) 00:09:45.389 24702.031 - 24802.855: 99.6745% ( 2) 00:09:45.389 24802.855 - 24903.680: 99.6995% ( 5) 00:09:45.389 24903.680 - 25004.505: 99.7145% ( 3) 00:09:45.389 25004.505 - 25105.329: 99.7296% ( 3) 00:09:45.389 25105.329 - 25206.154: 99.7496% ( 4) 00:09:45.389 25206.154 - 25306.978: 99.7646% ( 3) 00:09:45.389 25306.978 - 25407.803: 99.7847% ( 4) 00:09:45.389 25407.803 - 25508.628: 99.7997% ( 3) 00:09:45.389 25508.628 - 25609.452: 99.8247% ( 5) 00:09:45.389 25609.452 - 25710.277: 99.8397% ( 3) 00:09:45.389 25710.277 - 25811.102: 99.8548% ( 3) 00:09:45.389 25811.102 - 26012.751: 99.8898% ( 7) 00:09:45.389 26012.751 - 26214.400: 99.9299% ( 8) 00:09:45.389 26214.400 - 26416.049: 99.9700% ( 8) 00:09:45.389 26416.049 - 26617.698: 100.0000% ( 6) 00:09:45.389 00:09:45.389 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:45.389 ============================================================================== 00:09:45.389 Range in us Cumulative IO count 00:09:45.389 4889.994 - 4915.200: 0.0050% ( 1) 00:09:45.389 4965.612 - 4990.818: 0.0100% ( 1) 00:09:45.389 5016.025 - 5041.231: 0.0150% ( 1) 00:09:45.389 5091.643 - 5116.849: 0.0200% ( 1) 00:09:45.389 5116.849 - 5142.055: 0.0250% ( 1) 00:09:45.389 5142.055 - 5167.262: 0.0300% ( 1) 00:09:45.389 5217.674 - 5242.880: 0.0501% ( 4) 00:09:45.389 5242.880 - 5268.086: 0.0751% ( 5) 00:09:45.389 5268.086 - 5293.292: 0.1502% ( 15) 00:09:45.389 5293.292 - 5318.498: 0.2304% ( 16) 00:09:45.389 5318.498 - 5343.705: 0.3205% ( 18) 00:09:45.389 5343.705 - 5368.911: 0.4207% ( 20) 00:09:45.389 5368.911 - 5394.117: 0.5258% ( 21) 00:09:45.389 5394.117 - 5419.323: 0.6460% ( 24) 00:09:45.389 5419.323 - 5444.529: 0.8013% ( 31) 00:09:45.389 5444.529 - 5469.735: 0.9866% ( 37) 00:09:45.389 5469.735 - 5494.942: 1.1769% ( 38) 00:09:45.389 5494.942 - 5520.148: 1.4373% ( 52) 00:09:45.389 5520.148 - 5545.354: 1.6777% ( 48) 00:09:45.389 5545.354 - 5570.560: 1.8980% ( 44) 00:09:45.389 5570.560 - 5595.766: 2.1935% ( 59) 00:09:45.389 5595.766 - 5620.972: 2.5992% ( 81) 00:09:45.389 5620.972 - 5646.178: 3.1100% ( 102) 00:09:45.389 5646.178 - 5671.385: 3.6909% ( 116) 00:09:45.389 5671.385 - 5696.591: 4.5022% ( 162) 00:09:45.389 5696.591 - 5721.797: 5.6140% ( 222) 00:09:45.389 5721.797 - 5747.003: 6.5104% ( 179) 00:09:45.389 5747.003 - 5772.209: 7.5371% ( 205) 00:09:45.389 5772.209 - 5797.415: 8.7190% ( 236) 00:09:45.389 5797.415 - 5822.622: 9.9760% ( 251) 00:09:45.389 5822.622 - 5847.828: 11.4533% ( 295) 00:09:45.389 5847.828 - 5873.034: 13.3013% ( 369) 00:09:45.389 5873.034 - 5898.240: 15.0841% ( 356) 00:09:45.389 5898.240 - 5923.446: 17.2125% ( 425) 00:09:45.389 5923.446 - 5948.652: 19.1556% ( 388) 00:09:45.389 5948.652 - 5973.858: 21.0036% ( 369) 00:09:45.389 5973.858 - 5999.065: 22.9467% ( 388) 00:09:45.389 5999.065 - 6024.271: 24.9048% ( 391) 00:09:45.389 6024.271 - 6049.477: 27.2135% ( 461) 00:09:45.389 6049.477 - 6074.683: 29.2167% ( 400) 00:09:45.389 6074.683 - 6099.889: 31.8059% ( 517) 00:09:45.389 6099.889 - 6125.095: 33.8842% ( 415) 00:09:45.389 6125.095 - 6150.302: 36.1278% ( 448) 00:09:45.389 6150.302 - 6175.508: 39.0525% ( 584) 00:09:45.389 6175.508 - 6200.714: 41.8670% ( 562) 00:09:45.389 6200.714 - 6225.920: 44.1356% ( 453) 00:09:45.389 6225.920 - 6251.126: 46.7198% ( 516) 00:09:45.389 6251.126 - 6276.332: 50.3856% ( 732) 00:09:45.389 6276.332 - 6301.538: 53.4605% ( 614) 00:09:45.389 6301.538 - 6326.745: 56.4103% ( 589) 00:09:45.389 6326.745 - 6351.951: 59.5202% ( 621) 00:09:45.389 6351.951 - 6377.157: 61.3932% ( 374) 00:09:45.389 6377.157 - 6402.363: 
64.0825% ( 537) 00:09:45.389 6402.363 - 6427.569: 66.0256% ( 388) 00:09:45.389 6427.569 - 6452.775: 68.1440% ( 423) 00:09:45.389 6452.775 - 6503.188: 71.4994% ( 670) 00:09:45.389 6503.188 - 6553.600: 74.4341% ( 586) 00:09:45.389 6553.600 - 6604.012: 77.1134% ( 535) 00:09:45.389 6604.012 - 6654.425: 79.8778% ( 552) 00:09:45.389 6654.425 - 6704.837: 82.3618% ( 496) 00:09:45.389 6704.837 - 6755.249: 84.4902% ( 425) 00:09:45.389 6755.249 - 6805.662: 86.5585% ( 413) 00:09:45.389 6805.662 - 6856.074: 88.7019% ( 428) 00:09:45.389 6856.074 - 6906.486: 90.6851% ( 396) 00:09:45.389 6906.486 - 6956.898: 92.3878% ( 340) 00:09:45.389 6956.898 - 7007.311: 93.6999% ( 262) 00:09:45.389 7007.311 - 7057.723: 94.7065% ( 201) 00:09:45.389 7057.723 - 7108.135: 95.4878% ( 156) 00:09:45.389 7108.135 - 7158.548: 96.3141% ( 165) 00:09:45.389 7158.548 - 7208.960: 96.7248% ( 82) 00:09:45.389 7208.960 - 7259.372: 97.0202% ( 59) 00:09:45.389 7259.372 - 7309.785: 97.2957% ( 55) 00:09:45.389 7309.785 - 7360.197: 97.5711% ( 55) 00:09:45.389 7360.197 - 7410.609: 97.8516% ( 56) 00:09:45.389 7410.609 - 7461.022: 98.0869% ( 47) 00:09:45.389 7461.022 - 7511.434: 98.2923% ( 41) 00:09:45.389 7511.434 - 7561.846: 98.5176% ( 45) 00:09:45.389 7561.846 - 7612.258: 98.6378% ( 24) 00:09:45.389 7612.258 - 7662.671: 98.7981% ( 32) 00:09:45.390 7662.671 - 7713.083: 98.9583% ( 32) 00:09:45.390 7713.083 - 7763.495: 99.0935% ( 27) 00:09:45.390 7763.495 - 7813.908: 99.1436% ( 10) 00:09:45.390 7813.908 - 7864.320: 99.1887% ( 9) 00:09:45.390 7864.320 - 7914.732: 99.2288% ( 8) 00:09:45.390 7914.732 - 7965.145: 99.2638% ( 7) 00:09:45.390 7965.145 - 8015.557: 99.2989% ( 7) 00:09:45.390 8015.557 - 8065.969: 99.3289% ( 6) 00:09:45.390 8065.969 - 8116.382: 99.3540% ( 5) 00:09:45.390 8116.382 - 8166.794: 99.3590% ( 1) 00:09:45.390 21979.766 - 22080.591: 99.3790% ( 4) 00:09:45.390 22080.591 - 22181.415: 99.3990% ( 4) 00:09:45.390 22181.415 - 22282.240: 99.4191% ( 4) 00:09:45.390 22282.240 - 22383.065: 99.4391% ( 4) 00:09:45.390 22383.065 - 22483.889: 99.4591% ( 4) 00:09:45.390 22483.889 - 22584.714: 99.4792% ( 4) 00:09:45.390 22584.714 - 22685.538: 99.5042% ( 5) 00:09:45.390 22685.538 - 22786.363: 99.5242% ( 4) 00:09:45.390 22786.363 - 22887.188: 99.5443% ( 4) 00:09:45.390 22887.188 - 22988.012: 99.5643% ( 4) 00:09:45.390 22988.012 - 23088.837: 99.5843% ( 4) 00:09:45.390 23088.837 - 23189.662: 99.6044% ( 4) 00:09:45.390 23189.662 - 23290.486: 99.6244% ( 4) 00:09:45.390 23290.486 - 23391.311: 99.6444% ( 4) 00:09:45.390 23391.311 - 23492.135: 99.6645% ( 4) 00:09:45.390 23492.135 - 23592.960: 99.6845% ( 4) 00:09:45.390 23592.960 - 23693.785: 99.6995% ( 3) 00:09:45.390 23693.785 - 23794.609: 99.7196% ( 4) 00:09:45.390 23794.609 - 23895.434: 99.7396% ( 4) 00:09:45.390 23895.434 - 23996.258: 99.7596% ( 4) 00:09:45.390 23996.258 - 24097.083: 99.7796% ( 4) 00:09:45.390 24097.083 - 24197.908: 99.7997% ( 4) 00:09:45.390 24197.908 - 24298.732: 99.8197% ( 4) 00:09:45.390 24298.732 - 24399.557: 99.8397% ( 4) 00:09:45.390 24399.557 - 24500.382: 99.8598% ( 4) 00:09:45.390 24500.382 - 24601.206: 99.8798% ( 4) 00:09:45.390 24601.206 - 24702.031: 99.8998% ( 4) 00:09:45.390 24702.031 - 24802.855: 99.9199% ( 4) 00:09:45.390 24802.855 - 24903.680: 99.9349% ( 3) 00:09:45.390 24903.680 - 25004.505: 99.9549% ( 4) 00:09:45.390 25004.505 - 25105.329: 99.9750% ( 4) 00:09:45.390 25105.329 - 25206.154: 99.9950% ( 4) 00:09:45.390 25206.154 - 25306.978: 100.0000% ( 1) 00:09:45.390 00:09:45.390 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:45.390 
============================================================================== 00:09:45.390 Range in us Cumulative IO count 00:09:45.390 4889.994 - 4915.200: 0.0050% ( 1) 00:09:45.390 4990.818 - 5016.025: 0.0100% ( 1) 00:09:45.390 5016.025 - 5041.231: 0.0150% ( 1) 00:09:45.390 5041.231 - 5066.437: 0.0200% ( 1) 00:09:45.390 5091.643 - 5116.849: 0.0250% ( 1) 00:09:45.390 5167.262 - 5192.468: 0.0300% ( 1) 00:09:45.390 5192.468 - 5217.674: 0.0401% ( 2) 00:09:45.390 5217.674 - 5242.880: 0.0601% ( 4) 00:09:45.390 5242.880 - 5268.086: 0.0901% ( 6) 00:09:45.390 5268.086 - 5293.292: 0.1603% ( 14) 00:09:45.390 5293.292 - 5318.498: 0.2304% ( 14) 00:09:45.390 5318.498 - 5343.705: 0.2905% ( 12) 00:09:45.390 5343.705 - 5368.911: 0.4056% ( 23) 00:09:45.390 5368.911 - 5394.117: 0.5108% ( 21) 00:09:45.390 5394.117 - 5419.323: 0.6560% ( 29) 00:09:45.390 5419.323 - 5444.529: 0.8413% ( 37) 00:09:45.390 5444.529 - 5469.735: 1.0266% ( 37) 00:09:45.390 5469.735 - 5494.942: 1.2119% ( 37) 00:09:45.390 5494.942 - 5520.148: 1.4373% ( 45) 00:09:45.390 5520.148 - 5545.354: 1.6927% ( 51) 00:09:45.390 5545.354 - 5570.560: 1.9832% ( 58) 00:09:45.390 5570.560 - 5595.766: 2.4239% ( 88) 00:09:45.390 5595.766 - 5620.972: 2.8496% ( 85) 00:09:45.390 5620.972 - 5646.178: 3.3454% ( 99) 00:09:45.390 5646.178 - 5671.385: 3.9914% ( 129) 00:09:45.390 5671.385 - 5696.591: 4.9429% ( 190) 00:09:45.390 5696.591 - 5721.797: 6.0948% ( 230) 00:09:45.390 5721.797 - 5747.003: 7.1965% ( 220) 00:09:45.390 5747.003 - 5772.209: 8.4034% ( 241) 00:09:45.390 5772.209 - 5797.415: 9.6905% ( 257) 00:09:45.390 5797.415 - 5822.622: 11.2530% ( 312) 00:09:45.390 5822.622 - 5847.828: 12.8806% ( 325) 00:09:45.390 5847.828 - 5873.034: 14.5783% ( 339) 00:09:45.390 5873.034 - 5898.240: 16.2109% ( 326) 00:09:45.390 5898.240 - 5923.446: 18.0288% ( 363) 00:09:45.390 5923.446 - 5948.652: 19.8267% ( 359) 00:09:45.390 5948.652 - 5973.858: 21.7849% ( 391) 00:09:45.390 5973.858 - 5999.065: 23.6528% ( 373) 00:09:45.390 5999.065 - 6024.271: 25.4708% ( 363) 00:09:45.390 6024.271 - 6049.477: 27.3988% ( 385) 00:09:45.390 6049.477 - 6074.683: 29.5122% ( 422) 00:09:45.390 6074.683 - 6099.889: 31.8209% ( 461) 00:09:45.390 6099.889 - 6125.095: 34.2548% ( 486) 00:09:45.390 6125.095 - 6150.302: 36.3982% ( 428) 00:09:45.390 6150.302 - 6175.508: 39.2077% ( 561) 00:09:45.390 6175.508 - 6200.714: 41.9972% ( 557) 00:09:45.390 6200.714 - 6225.920: 45.3926% ( 678) 00:09:45.390 6225.920 - 6251.126: 48.5026% ( 621) 00:09:45.390 6251.126 - 6276.332: 51.1518% ( 529) 00:09:45.390 6276.332 - 6301.538: 54.0014% ( 569) 00:09:45.390 6301.538 - 6326.745: 56.9211% ( 583) 00:09:45.390 6326.745 - 6351.951: 59.7306% ( 561) 00:09:45.390 6351.951 - 6377.157: 61.8389% ( 421) 00:09:45.390 6377.157 - 6402.363: 63.9173% ( 415) 00:09:45.390 6402.363 - 6427.569: 65.9004% ( 396) 00:09:45.390 6427.569 - 6452.775: 67.5431% ( 328) 00:09:45.390 6452.775 - 6503.188: 70.6881% ( 628) 00:09:45.390 6503.188 - 6553.600: 73.9633% ( 654) 00:09:45.390 6553.600 - 6604.012: 77.0683% ( 620) 00:09:45.390 6604.012 - 6654.425: 79.3870% ( 463) 00:09:45.390 6654.425 - 6704.837: 81.5054% ( 423) 00:09:45.390 6704.837 - 6755.249: 83.5537% ( 409) 00:09:45.390 6755.249 - 6805.662: 85.8223% ( 453) 00:09:45.390 6805.662 - 6856.074: 87.9056% ( 416) 00:09:45.390 6856.074 - 6906.486: 89.7937% ( 377) 00:09:45.390 6906.486 - 6956.898: 91.4864% ( 338) 00:09:45.390 6956.898 - 7007.311: 92.9988% ( 302) 00:09:45.390 7007.311 - 7057.723: 94.4712% ( 294) 00:09:45.390 7057.723 - 7108.135: 95.5028% ( 206) 00:09:45.390 7108.135 - 7158.548: 96.1589% 
( 131) 00:09:45.390 7158.548 - 7208.960: 96.6396% ( 96) 00:09:45.390 7208.960 - 7259.372: 97.0202% ( 76) 00:09:45.390 7259.372 - 7309.785: 97.2957% ( 55) 00:09:45.390 7309.785 - 7360.197: 97.5461% ( 50) 00:09:45.390 7360.197 - 7410.609: 97.7163% ( 34) 00:09:45.390 7410.609 - 7461.022: 97.9117% ( 39) 00:09:45.390 7461.022 - 7511.434: 98.1320% ( 44) 00:09:45.390 7511.434 - 7561.846: 98.3023% ( 34) 00:09:45.390 7561.846 - 7612.258: 98.4275% ( 25) 00:09:45.390 7612.258 - 7662.671: 98.5226% ( 19) 00:09:45.390 7662.671 - 7713.083: 98.6178% ( 19) 00:09:45.390 7713.083 - 7763.495: 98.6979% ( 16) 00:09:45.390 7763.495 - 7813.908: 98.7630% ( 13) 00:09:45.390 7813.908 - 7864.320: 98.8281% ( 13) 00:09:45.390 7864.320 - 7914.732: 98.8782% ( 10) 00:09:45.390 7914.732 - 7965.145: 98.9333% ( 11) 00:09:45.390 7965.145 - 8015.557: 99.0635% ( 26) 00:09:45.390 8015.557 - 8065.969: 99.2338% ( 34) 00:09:45.390 8065.969 - 8116.382: 99.2538% ( 4) 00:09:45.390 8116.382 - 8166.794: 99.2638% ( 2) 00:09:45.390 8166.794 - 8217.206: 99.2738% ( 2) 00:09:45.390 8217.206 - 8267.618: 99.2889% ( 3) 00:09:45.390 8267.618 - 8318.031: 99.2989% ( 2) 00:09:45.390 8318.031 - 8368.443: 99.3089% ( 2) 00:09:45.390 8368.443 - 8418.855: 99.3239% ( 3) 00:09:45.390 8418.855 - 8469.268: 99.3339% ( 2) 00:09:45.390 8469.268 - 8519.680: 99.3440% ( 2) 00:09:45.390 8519.680 - 8570.092: 99.3590% ( 3) 00:09:45.390 20769.871 - 20870.695: 99.3690% ( 2) 00:09:45.390 20870.695 - 20971.520: 99.3890% ( 4) 00:09:45.390 20971.520 - 21072.345: 99.4091% ( 4) 00:09:45.390 21072.345 - 21173.169: 99.4291% ( 4) 00:09:45.390 21173.169 - 21273.994: 99.4491% ( 4) 00:09:45.390 21273.994 - 21374.818: 99.4641% ( 3) 00:09:45.390 21374.818 - 21475.643: 99.4842% ( 4) 00:09:45.390 21475.643 - 21576.468: 99.5042% ( 4) 00:09:45.390 21576.468 - 21677.292: 99.5242% ( 4) 00:09:45.390 21677.292 - 21778.117: 99.5443% ( 4) 00:09:45.390 21778.117 - 21878.942: 99.5643% ( 4) 00:09:45.390 21878.942 - 21979.766: 99.5843% ( 4) 00:09:45.390 21979.766 - 22080.591: 99.6044% ( 4) 00:09:45.390 22080.591 - 22181.415: 99.6244% ( 4) 00:09:45.390 22181.415 - 22282.240: 99.6394% ( 3) 00:09:45.390 22282.240 - 22383.065: 99.6595% ( 4) 00:09:45.390 22383.065 - 22483.889: 99.6795% ( 4) 00:09:45.390 22483.889 - 22584.714: 99.6995% ( 4) 00:09:45.390 22584.714 - 22685.538: 99.7196% ( 4) 00:09:45.390 22685.538 - 22786.363: 99.7396% ( 4) 00:09:45.390 22786.363 - 22887.188: 99.7596% ( 4) 00:09:45.390 22887.188 - 22988.012: 99.7796% ( 4) 00:09:45.390 22988.012 - 23088.837: 99.7997% ( 4) 00:09:45.390 23088.837 - 23189.662: 99.8197% ( 4) 00:09:45.390 23189.662 - 23290.486: 99.8397% ( 4) 00:09:45.390 23290.486 - 23391.311: 99.8598% ( 4) 00:09:45.390 23391.311 - 23492.135: 99.8798% ( 4) 00:09:45.390 23492.135 - 23592.960: 99.8998% ( 4) 00:09:45.390 23592.960 - 23693.785: 99.9199% ( 4) 00:09:45.390 23693.785 - 23794.609: 99.9399% ( 4) 00:09:45.390 23794.609 - 23895.434: 99.9599% ( 4) 00:09:45.390 23895.434 - 23996.258: 99.9800% ( 4) 00:09:45.390 23996.258 - 24097.083: 100.0000% ( 4) 00:09:45.390 00:09:45.390 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:45.390 ============================================================================== 00:09:45.390 Range in us Cumulative IO count 00:09:45.390 5066.437 - 5091.643: 0.0050% ( 1) 00:09:45.390 5091.643 - 5116.849: 0.0100% ( 1) 00:09:45.390 5167.262 - 5192.468: 0.0250% ( 3) 00:09:45.390 5192.468 - 5217.674: 0.0300% ( 1) 00:09:45.390 5217.674 - 5242.880: 0.0851% ( 11) 00:09:45.390 5242.880 - 5268.086: 0.1402% ( 11) 00:09:45.390 5268.086 - 
5293.292: 0.1953% ( 11) 00:09:45.390 5293.292 - 5318.498: 0.2554% ( 12) 00:09:45.391 5318.498 - 5343.705: 0.3506% ( 19) 00:09:45.391 5343.705 - 5368.911: 0.4407% ( 18) 00:09:45.391 5368.911 - 5394.117: 0.5659% ( 25) 00:09:45.391 5394.117 - 5419.323: 0.7111% ( 29) 00:09:45.391 5419.323 - 5444.529: 0.8714% ( 32) 00:09:45.391 5444.529 - 5469.735: 1.0467% ( 35) 00:09:45.391 5469.735 - 5494.942: 1.2320% ( 37) 00:09:45.391 5494.942 - 5520.148: 1.4523% ( 44) 00:09:45.391 5520.148 - 5545.354: 1.6927% ( 48) 00:09:45.391 5545.354 - 5570.560: 1.9481% ( 51) 00:09:45.391 5570.560 - 5595.766: 2.3037% ( 71) 00:09:45.391 5595.766 - 5620.972: 2.6943% ( 78) 00:09:45.391 5620.972 - 5646.178: 3.1300% ( 87) 00:09:45.391 5646.178 - 5671.385: 3.7510% ( 124) 00:09:45.391 5671.385 - 5696.591: 4.6074% ( 171) 00:09:45.391 5696.591 - 5721.797: 5.5439% ( 187) 00:09:45.391 5721.797 - 5747.003: 6.5304% ( 197) 00:09:45.391 5747.003 - 5772.209: 7.6973% ( 233) 00:09:45.391 5772.209 - 5797.415: 9.0244% ( 265) 00:09:45.391 5797.415 - 5822.622: 10.5919% ( 313) 00:09:45.391 5822.622 - 5847.828: 12.2246% ( 326) 00:09:45.391 5847.828 - 5873.034: 13.8822% ( 331) 00:09:45.391 5873.034 - 5898.240: 15.4848% ( 320) 00:09:45.391 5898.240 - 5923.446: 17.2426% ( 351) 00:09:45.391 5923.446 - 5948.652: 18.9854% ( 348) 00:09:45.391 5948.652 - 5973.858: 20.8934% ( 381) 00:09:45.391 5973.858 - 5999.065: 22.9367% ( 408) 00:09:45.391 5999.065 - 6024.271: 24.8998% ( 392) 00:09:45.391 6024.271 - 6049.477: 26.9081% ( 401) 00:09:45.391 6049.477 - 6074.683: 29.0315% ( 424) 00:09:45.391 6074.683 - 6099.889: 31.4253% ( 478) 00:09:45.391 6099.889 - 6125.095: 34.0044% ( 515) 00:09:45.391 6125.095 - 6150.302: 36.8440% ( 567) 00:09:45.391 6150.302 - 6175.508: 39.4181% ( 514) 00:09:45.391 6175.508 - 6200.714: 42.7684% ( 669) 00:09:45.391 6200.714 - 6225.920: 45.8333% ( 612) 00:09:45.391 6225.920 - 6251.126: 48.9433% ( 621) 00:09:45.391 6251.126 - 6276.332: 51.8029% ( 571) 00:09:45.391 6276.332 - 6301.538: 54.7626% ( 591) 00:09:45.391 6301.538 - 6326.745: 57.9327% ( 633) 00:09:45.391 6326.745 - 6351.951: 60.2564% ( 464) 00:09:45.391 6351.951 - 6377.157: 62.7204% ( 492) 00:09:45.391 6377.157 - 6402.363: 65.1042% ( 476) 00:09:45.391 6402.363 - 6427.569: 67.0373% ( 386) 00:09:45.391 6427.569 - 6452.775: 68.7350% ( 339) 00:09:45.391 6452.775 - 6503.188: 71.5745% ( 567) 00:09:45.391 6503.188 - 6553.600: 74.6695% ( 618) 00:09:45.391 6553.600 - 6604.012: 77.6092% ( 587) 00:09:45.391 6604.012 - 6654.425: 80.0481% ( 487) 00:09:45.391 6654.425 - 6704.837: 82.3568% ( 461) 00:09:45.391 6704.837 - 6755.249: 84.4151% ( 411) 00:09:45.391 6755.249 - 6805.662: 86.4533% ( 407) 00:09:45.391 6805.662 - 6856.074: 88.3464% ( 378) 00:09:45.391 6856.074 - 6906.486: 90.0942% ( 349) 00:09:45.391 6906.486 - 6956.898: 91.7368% ( 328) 00:09:45.391 6956.898 - 7007.311: 93.0188% ( 256) 00:09:45.391 7007.311 - 7057.723: 94.0204% ( 200) 00:09:45.391 7057.723 - 7108.135: 94.9619% ( 188) 00:09:45.391 7108.135 - 7158.548: 95.6380% ( 135) 00:09:45.391 7158.548 - 7208.960: 96.1538% ( 103) 00:09:45.391 7208.960 - 7259.372: 96.5946% ( 88) 00:09:45.391 7259.372 - 7309.785: 96.9551% ( 72) 00:09:45.391 7309.785 - 7360.197: 97.3608% ( 81) 00:09:45.391 7360.197 - 7410.609: 97.6162% ( 51) 00:09:45.391 7410.609 - 7461.022: 97.7965% ( 36) 00:09:45.391 7461.022 - 7511.434: 97.9467% ( 30) 00:09:45.391 7511.434 - 7561.846: 98.0569% ( 22) 00:09:45.391 7561.846 - 7612.258: 98.1370% ( 16) 00:09:45.391 7612.258 - 7662.671: 98.2272% ( 18) 00:09:45.391 7662.671 - 7713.083: 98.3173% ( 18) 00:09:45.391 7713.083 - 
7763.495: 98.3974% ( 16) 00:09:45.391 7763.495 - 7813.908: 98.4876% ( 18) 00:09:45.391 7813.908 - 7864.320: 98.5827% ( 19) 00:09:45.391 7864.320 - 7914.732: 98.6829% ( 20) 00:09:45.391 7914.732 - 7965.145: 98.8031% ( 24) 00:09:45.391 7965.145 - 8015.557: 98.9283% ( 25) 00:09:45.391 8015.557 - 8065.969: 99.0385% ( 22) 00:09:45.391 8065.969 - 8116.382: 99.1136% ( 15) 00:09:45.391 8116.382 - 8166.794: 99.1386% ( 5) 00:09:45.391 8166.794 - 8217.206: 99.1587% ( 4) 00:09:45.391 8217.206 - 8267.618: 99.1837% ( 5) 00:09:45.391 8267.618 - 8318.031: 99.2037% ( 4) 00:09:45.391 8318.031 - 8368.443: 99.2338% ( 6) 00:09:45.391 8368.443 - 8418.855: 99.2538% ( 4) 00:09:45.391 8418.855 - 8469.268: 99.2788% ( 5) 00:09:45.391 8469.268 - 8519.680: 99.3039% ( 5) 00:09:45.391 8519.680 - 8570.092: 99.3239% ( 4) 00:09:45.391 8570.092 - 8620.505: 99.3490% ( 5) 00:09:45.391 8620.505 - 8670.917: 99.3590% ( 2) 00:09:45.391 19862.449 - 19963.274: 99.3740% ( 3) 00:09:45.391 19963.274 - 20064.098: 99.3940% ( 4) 00:09:45.391 20064.098 - 20164.923: 99.4141% ( 4) 00:09:45.391 20164.923 - 20265.748: 99.4391% ( 5) 00:09:45.391 20265.748 - 20366.572: 99.4591% ( 4) 00:09:45.391 20366.572 - 20467.397: 99.4792% ( 4) 00:09:45.391 20467.397 - 20568.222: 99.4942% ( 3) 00:09:45.391 20568.222 - 20669.046: 99.5142% ( 4) 00:09:45.391 20669.046 - 20769.871: 99.5343% ( 4) 00:09:45.391 20769.871 - 20870.695: 99.5543% ( 4) 00:09:45.391 20870.695 - 20971.520: 99.5743% ( 4) 00:09:45.391 20971.520 - 21072.345: 99.5994% ( 5) 00:09:45.391 21072.345 - 21173.169: 99.6194% ( 4) 00:09:45.391 21173.169 - 21273.994: 99.6394% ( 4) 00:09:45.391 21273.994 - 21374.818: 99.6595% ( 4) 00:09:45.391 21374.818 - 21475.643: 99.6795% ( 4) 00:09:45.391 21475.643 - 21576.468: 99.6995% ( 4) 00:09:45.391 21576.468 - 21677.292: 99.7196% ( 4) 00:09:45.391 21677.292 - 21778.117: 99.7346% ( 3) 00:09:45.391 21778.117 - 21878.942: 99.7546% ( 4) 00:09:45.391 21878.942 - 21979.766: 99.7746% ( 4) 00:09:45.391 21979.766 - 22080.591: 99.7947% ( 4) 00:09:45.391 22080.591 - 22181.415: 99.8147% ( 4) 00:09:45.391 22181.415 - 22282.240: 99.8347% ( 4) 00:09:45.391 22282.240 - 22383.065: 99.8548% ( 4) 00:09:45.391 22383.065 - 22483.889: 99.8748% ( 4) 00:09:45.391 22483.889 - 22584.714: 99.8948% ( 4) 00:09:45.391 22584.714 - 22685.538: 99.9149% ( 4) 00:09:45.391 22685.538 - 22786.363: 99.9349% ( 4) 00:09:45.391 22786.363 - 22887.188: 99.9549% ( 4) 00:09:45.391 22887.188 - 22988.012: 99.9750% ( 4) 00:09:45.391 22988.012 - 23088.837: 100.0000% ( 5) 00:09:45.391 00:09:45.391 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:45.391 ============================================================================== 00:09:45.391 Range in us Cumulative IO count 00:09:45.391 4940.406 - 4965.612: 0.0050% ( 1) 00:09:45.391 5016.025 - 5041.231: 0.0100% ( 1) 00:09:45.391 5041.231 - 5066.437: 0.0199% ( 2) 00:09:45.391 5091.643 - 5116.849: 0.0249% ( 1) 00:09:45.391 5116.849 - 5142.055: 0.0299% ( 1) 00:09:45.391 5167.262 - 5192.468: 0.0348% ( 1) 00:09:45.391 5192.468 - 5217.674: 0.0448% ( 2) 00:09:45.391 5217.674 - 5242.880: 0.0697% ( 5) 00:09:45.391 5242.880 - 5268.086: 0.1145% ( 9) 00:09:45.391 5268.086 - 5293.292: 0.1642% ( 10) 00:09:45.391 5293.292 - 5318.498: 0.2239% ( 12) 00:09:45.391 5318.498 - 5343.705: 0.2886% ( 13) 00:09:45.391 5343.705 - 5368.911: 0.3732% ( 17) 00:09:45.391 5368.911 - 5394.117: 0.4926% ( 24) 00:09:45.391 5394.117 - 5419.323: 0.5922% ( 20) 00:09:45.391 5419.323 - 5444.529: 0.7663% ( 35) 00:09:45.391 5444.529 - 5469.735: 0.9853% ( 44) 00:09:45.391 5469.735 - 
5494.942: 1.1943% ( 42) 00:09:45.391 5494.942 - 5520.148: 1.4729% ( 56) 00:09:45.391 5520.148 - 5545.354: 1.7367% ( 53) 00:09:45.391 5545.354 - 5570.560: 2.0253% ( 58) 00:09:45.391 5570.560 - 5595.766: 2.3487% ( 65) 00:09:45.391 5595.766 - 5620.972: 2.7120% ( 73) 00:09:45.391 5620.972 - 5646.178: 3.2693% ( 112) 00:09:45.391 5646.178 - 5671.385: 3.9112% ( 129) 00:09:45.391 5671.385 - 5696.591: 4.7024% ( 159) 00:09:45.391 5696.591 - 5721.797: 5.4986% ( 160) 00:09:45.391 5721.797 - 5747.003: 6.5486% ( 211) 00:09:45.391 5747.003 - 5772.209: 7.8274% ( 257) 00:09:45.391 5772.209 - 5797.415: 9.1262% ( 261) 00:09:45.391 5797.415 - 5822.622: 10.6290% ( 302) 00:09:45.391 5822.622 - 5847.828: 12.1616% ( 308) 00:09:45.391 5847.828 - 5873.034: 14.0177% ( 373) 00:09:45.391 5873.034 - 5898.240: 15.5155% ( 301) 00:09:45.391 5898.240 - 5923.446: 17.3219% ( 363) 00:09:45.391 5923.446 - 5948.652: 19.2078% ( 379) 00:09:45.391 5948.652 - 5973.858: 21.0589% ( 372) 00:09:45.391 5973.858 - 5999.065: 22.9598% ( 382) 00:09:45.391 5999.065 - 6024.271: 24.8159% ( 373) 00:09:45.391 6024.271 - 6049.477: 26.9258% ( 424) 00:09:45.391 6049.477 - 6074.683: 28.9311% ( 403) 00:09:45.391 6074.683 - 6099.889: 31.4590% ( 508) 00:09:45.391 6099.889 - 6125.095: 33.7779% ( 466) 00:09:45.391 6125.095 - 6150.302: 36.5943% ( 566) 00:09:45.391 6150.302 - 6175.508: 39.2367% ( 531) 00:09:45.391 6175.508 - 6200.714: 41.9387% ( 543) 00:09:45.391 6200.714 - 6225.920: 44.6407% ( 543) 00:09:45.391 6225.920 - 6251.126: 47.8453% ( 644) 00:09:45.391 6251.126 - 6276.332: 51.2739% ( 689) 00:09:45.391 6276.332 - 6301.538: 54.3242% ( 613) 00:09:45.391 6301.538 - 6326.745: 57.5587% ( 650) 00:09:45.391 6326.745 - 6351.951: 60.0070% ( 492) 00:09:45.391 6351.951 - 6377.157: 62.3955% ( 480) 00:09:45.391 6377.157 - 6402.363: 64.4805% ( 419) 00:09:45.391 6402.363 - 6427.569: 66.5953% ( 425) 00:09:45.391 6427.569 - 6452.775: 68.3021% ( 343) 00:09:45.391 6452.775 - 6503.188: 71.8203% ( 707) 00:09:45.391 6503.188 - 6553.600: 74.3780% ( 514) 00:09:45.391 6553.600 - 6604.012: 76.9009% ( 507) 00:09:45.391 6604.012 - 6654.425: 79.6278% ( 548) 00:09:45.391 6654.425 - 6704.837: 82.0661% ( 490) 00:09:45.391 6704.837 - 6755.249: 84.2158% ( 432) 00:09:45.391 6755.249 - 6805.662: 86.2709% ( 413) 00:09:45.391 6805.662 - 6856.074: 88.2464% ( 397) 00:09:45.391 6856.074 - 6906.486: 90.1224% ( 377) 00:09:45.392 6906.486 - 6956.898: 91.8242% ( 342) 00:09:45.392 6956.898 - 7007.311: 93.1777% ( 272) 00:09:45.392 7007.311 - 7057.723: 94.3173% ( 229) 00:09:45.392 7057.723 - 7108.135: 95.1234% ( 162) 00:09:45.392 7108.135 - 7158.548: 95.7703% ( 130) 00:09:45.392 7158.548 - 7208.960: 96.2878% ( 104) 00:09:45.392 7208.960 - 7259.372: 96.6312% ( 69) 00:09:45.392 7259.372 - 7309.785: 96.9198% ( 58) 00:09:45.392 7309.785 - 7360.197: 97.1487% ( 46) 00:09:45.392 7360.197 - 7410.609: 97.2631% ( 23) 00:09:45.392 7410.609 - 7461.022: 97.3676% ( 21) 00:09:45.392 7461.022 - 7511.434: 97.4721% ( 21) 00:09:45.392 7511.434 - 7561.846: 97.5766% ( 21) 00:09:45.392 7561.846 - 7612.258: 97.8652% ( 58) 00:09:45.392 7612.258 - 7662.671: 97.9598% ( 19) 00:09:45.392 7662.671 - 7713.083: 98.0295% ( 14) 00:09:45.392 7713.083 - 7763.495: 98.0991% ( 14) 00:09:45.392 7763.495 - 7813.908: 98.1688% ( 14) 00:09:45.392 7813.908 - 7864.320: 98.2335% ( 13) 00:09:45.392 7864.320 - 7914.732: 98.3031% ( 14) 00:09:45.392 7914.732 - 7965.145: 98.4226% ( 24) 00:09:45.392 7965.145 - 8015.557: 98.7410% ( 64) 00:09:45.392 8015.557 - 8065.969: 98.8057% ( 13) 00:09:45.392 8065.969 - 8116.382: 98.8505% ( 9) 00:09:45.392 
8116.382 - 8166.794: 98.8903% ( 8) 00:09:45.392 8166.794 - 8217.206: 98.9202% ( 6) 00:09:45.392 8217.206 - 8267.618: 98.9699% ( 10) 00:09:45.392 8267.618 - 8318.031: 99.0496% ( 16) 00:09:45.392 8318.031 - 8368.443: 99.1541% ( 21) 00:09:45.392 8368.443 - 8418.855: 99.2138% ( 12) 00:09:45.392 8418.855 - 8469.268: 99.2337% ( 4) 00:09:45.392 8469.268 - 8519.680: 99.2586% ( 5) 00:09:45.392 8519.680 - 8570.092: 99.2735% ( 3) 00:09:45.392 8570.092 - 8620.505: 99.2834% ( 2) 00:09:45.392 8620.505 - 8670.917: 99.2984% ( 3) 00:09:45.392 8670.917 - 8721.329: 99.3083% ( 2) 00:09:45.392 8721.329 - 8771.742: 99.3232% ( 3) 00:09:45.392 8771.742 - 8822.154: 99.3332% ( 2) 00:09:45.392 8822.154 - 8872.566: 99.3432% ( 2) 00:09:45.392 8872.566 - 8922.978: 99.3531% ( 2) 00:09:45.392 8922.978 - 8973.391: 99.3581% ( 1) 00:09:45.392 9275.865 - 9326.277: 99.3631% ( 1) 00:09:45.392 11897.305 - 11947.717: 99.3730% ( 2) 00:09:45.392 11947.717 - 11998.129: 99.3830% ( 2) 00:09:45.392 11998.129 - 12048.542: 99.3929% ( 2) 00:09:45.392 12048.542 - 12098.954: 99.4029% ( 2) 00:09:45.392 12098.954 - 12149.366: 99.4128% ( 2) 00:09:45.392 12149.366 - 12199.778: 99.4228% ( 2) 00:09:45.392 12199.778 - 12250.191: 99.4327% ( 2) 00:09:45.392 12250.191 - 12300.603: 99.4477% ( 3) 00:09:45.392 12300.603 - 12351.015: 99.4576% ( 2) 00:09:45.392 12351.015 - 12401.428: 99.4676% ( 2) 00:09:45.392 12401.428 - 12451.840: 99.4775% ( 2) 00:09:45.392 12451.840 - 12502.252: 99.4875% ( 2) 00:09:45.392 12502.252 - 12552.665: 99.4974% ( 2) 00:09:45.392 12552.665 - 12603.077: 99.5074% ( 2) 00:09:45.392 12603.077 - 12653.489: 99.5173% ( 2) 00:09:45.392 12653.489 - 12703.902: 99.5273% ( 2) 00:09:45.392 12703.902 - 12754.314: 99.5372% ( 2) 00:09:45.392 12754.314 - 12804.726: 99.5521% ( 3) 00:09:45.392 12804.726 - 12855.138: 99.5621% ( 2) 00:09:45.392 12855.138 - 12905.551: 99.5721% ( 2) 00:09:45.392 12905.551 - 13006.375: 99.5920% ( 4) 00:09:45.392 13006.375 - 13107.200: 99.6119% ( 4) 00:09:45.392 13107.200 - 13208.025: 99.6318% ( 4) 00:09:45.392 13208.025 - 13308.849: 99.6517% ( 4) 00:09:45.392 13308.849 - 13409.674: 99.6716% ( 4) 00:09:45.392 13409.674 - 13510.498: 99.6915% ( 4) 00:09:45.392 13510.498 - 13611.323: 99.7164% ( 5) 00:09:45.392 13611.323 - 13712.148: 99.7363% ( 4) 00:09:45.392 13712.148 - 13812.972: 99.7562% ( 4) 00:09:45.392 13812.972 - 13913.797: 99.7761% ( 4) 00:09:45.392 13913.797 - 14014.622: 99.7960% ( 4) 00:09:45.392 14014.622 - 14115.446: 99.8159% ( 4) 00:09:45.392 14115.446 - 14216.271: 99.8358% ( 4) 00:09:45.392 14216.271 - 14317.095: 99.8607% ( 5) 00:09:45.392 14317.095 - 14417.920: 99.8806% ( 4) 00:09:45.392 14417.920 - 14518.745: 99.9005% ( 4) 00:09:45.392 14518.745 - 14619.569: 99.9204% ( 4) 00:09:45.392 14619.569 - 14720.394: 99.9403% ( 4) 00:09:45.392 14720.394 - 14821.218: 99.9602% ( 4) 00:09:45.392 14821.218 - 14922.043: 99.9851% ( 5) 00:09:45.392 14922.043 - 15022.868: 100.0000% ( 3) 00:09:45.392 00:09:45.392 15:51:56 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:09:45.392 00:09:45.392 real 0m2.614s 00:09:45.392 user 0m2.304s 00:09:45.392 sys 0m0.200s 00:09:45.392 15:51:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:45.392 15:51:56 -- common/autotest_common.sh@10 -- # set +x 00:09:45.392 ************************************ 00:09:45.392 END TEST nvme_perf 00:09:45.392 ************************************ 00:09:45.392 15:51:56 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:45.392 15:51:56 -- common/autotest_common.sh@1087 -- # '[' 4 
-le 1 ']' 00:09:45.392 15:51:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:45.392 15:51:56 -- common/autotest_common.sh@10 -- # set +x 00:09:45.392 ************************************ 00:09:45.392 START TEST nvme_hello_world 00:09:45.392 ************************************ 00:09:45.392 15:51:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:45.392 Initializing NVMe Controllers 00:09:45.392 Attached to 0000:00:09.0 00:09:45.392 Namespace ID: 1 size: 1GB 00:09:45.392 Attached to 0000:00:06.0 00:09:45.392 Namespace ID: 1 size: 6GB 00:09:45.392 Attached to 0000:00:07.0 00:09:45.392 Namespace ID: 1 size: 5GB 00:09:45.392 Attached to 0000:00:08.0 00:09:45.392 Namespace ID: 1 size: 4GB 00:09:45.392 Namespace ID: 2 size: 4GB 00:09:45.392 Namespace ID: 3 size: 4GB 00:09:45.392 Initialization complete. 00:09:45.392 INFO: using host memory buffer for IO 00:09:45.392 Hello world! 00:09:45.392 INFO: using host memory buffer for IO 00:09:45.392 Hello world! 00:09:45.392 INFO: using host memory buffer for IO 00:09:45.392 Hello world! 00:09:45.392 INFO: using host memory buffer for IO 00:09:45.392 Hello world! 00:09:45.392 INFO: using host memory buffer for IO 00:09:45.392 Hello world! 00:09:45.392 INFO: using host memory buffer for IO 00:09:45.392 Hello world! 00:09:45.392 00:09:45.392 real 0m0.256s 00:09:45.392 user 0m0.115s 00:09:45.392 sys 0m0.099s 00:09:45.392 15:51:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:45.392 ************************************ 00:09:45.392 END TEST nvme_hello_world 00:09:45.392 ************************************ 00:09:45.392 15:51:56 -- common/autotest_common.sh@10 -- # set +x 00:09:45.650 15:51:56 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:45.650 15:51:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:45.650 15:51:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:45.650 15:51:56 -- common/autotest_common.sh@10 -- # set +x 00:09:45.650 ************************************ 00:09:45.650 START TEST nvme_sgl 00:09:45.650 ************************************ 00:09:45.650 15:51:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:45.650 0000:00:09.0: build_io_request_0 Invalid IO length parameter 00:09:45.650 0000:00:09.0: build_io_request_1 Invalid IO length parameter 00:09:45.650 0000:00:09.0: build_io_request_2 Invalid IO length parameter 00:09:45.650 0000:00:09.0: build_io_request_3 Invalid IO length parameter 00:09:45.650 0000:00:09.0: build_io_request_4 Invalid IO length parameter 00:09:45.650 0000:00:09.0: build_io_request_5 Invalid IO length parameter 00:09:45.650 0000:00:09.0: build_io_request_6 Invalid IO length parameter 00:09:45.650 0000:00:09.0: build_io_request_7 Invalid IO length parameter 00:09:45.650 0000:00:09.0: build_io_request_8 Invalid IO length parameter 00:09:45.650 0000:00:09.0: build_io_request_9 Invalid IO length parameter 00:09:45.650 0000:00:09.0: build_io_request_10 Invalid IO length parameter 00:09:45.650 0000:00:09.0: build_io_request_11 Invalid IO length parameter 00:09:45.650 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:09:45.650 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:09:45.650 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:09:45.650 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:09:45.650 0000:00:06.0: build_io_request_9 Invalid IO length parameter 
00:09:45.650 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:09:45.650 0000:00:07.0: build_io_request_0 Invalid IO length parameter 00:09:45.650 0000:00:07.0: build_io_request_1 Invalid IO length parameter 00:09:45.909 0000:00:07.0: build_io_request_3 Invalid IO length parameter 00:09:45.909 0000:00:07.0: build_io_request_8 Invalid IO length parameter 00:09:45.909 0000:00:07.0: build_io_request_9 Invalid IO length parameter 00:09:45.909 0000:00:07.0: build_io_request_11 Invalid IO length parameter 00:09:45.909 0000:00:08.0: build_io_request_0 Invalid IO length parameter 00:09:45.909 0000:00:08.0: build_io_request_1 Invalid IO length parameter 00:09:45.909 0000:00:08.0: build_io_request_2 Invalid IO length parameter 00:09:45.909 0000:00:08.0: build_io_request_3 Invalid IO length parameter 00:09:45.909 0000:00:08.0: build_io_request_4 Invalid IO length parameter 00:09:45.909 0000:00:08.0: build_io_request_5 Invalid IO length parameter 00:09:45.909 0000:00:08.0: build_io_request_6 Invalid IO length parameter 00:09:45.909 0000:00:08.0: build_io_request_7 Invalid IO length parameter 00:09:45.909 0000:00:08.0: build_io_request_8 Invalid IO length parameter 00:09:45.909 0000:00:08.0: build_io_request_9 Invalid IO length parameter 00:09:45.909 0000:00:08.0: build_io_request_10 Invalid IO length parameter 00:09:45.909 0000:00:08.0: build_io_request_11 Invalid IO length parameter 00:09:45.909 NVMe Readv/Writev Request test 00:09:45.909 Attached to 0000:00:09.0 00:09:45.909 Attached to 0000:00:06.0 00:09:45.909 Attached to 0000:00:07.0 00:09:45.909 Attached to 0000:00:08.0 00:09:45.909 0000:00:06.0: build_io_request_2 test passed 00:09:45.909 0000:00:06.0: build_io_request_4 test passed 00:09:45.909 0000:00:06.0: build_io_request_5 test passed 00:09:45.909 0000:00:06.0: build_io_request_6 test passed 00:09:45.909 0000:00:06.0: build_io_request_7 test passed 00:09:45.909 0000:00:06.0: build_io_request_10 test passed 00:09:45.909 0000:00:07.0: build_io_request_2 test passed 00:09:45.909 0000:00:07.0: build_io_request_4 test passed 00:09:45.909 0000:00:07.0: build_io_request_5 test passed 00:09:45.909 0000:00:07.0: build_io_request_6 test passed 00:09:45.909 0000:00:07.0: build_io_request_7 test passed 00:09:45.909 0000:00:07.0: build_io_request_10 test passed 00:09:45.909 Cleaning up... 00:09:45.909 00:09:45.909 real 0m0.373s 00:09:45.909 user 0m0.236s 00:09:45.909 sys 0m0.088s 00:09:45.909 15:51:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:45.909 15:51:57 -- common/autotest_common.sh@10 -- # set +x 00:09:45.909 ************************************ 00:09:45.909 END TEST nvme_sgl 00:09:45.909 ************************************ 00:09:45.909 15:51:57 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:45.909 15:51:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:45.909 15:51:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:45.909 15:51:57 -- common/autotest_common.sh@10 -- # set +x 00:09:45.909 ************************************ 00:09:45.909 START TEST nvme_e2edp 00:09:45.909 ************************************ 00:09:45.909 15:51:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:46.167 NVMe Write/Read with End-to-End data protection test 00:09:46.167 Attached to 0000:00:09.0 00:09:46.167 Attached to 0000:00:06.0 00:09:46.167 Attached to 0000:00:07.0 00:09:46.167 Attached to 0000:00:08.0 00:09:46.167 Cleaning up... 
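Note: the build_io_request_* lines above come from the standalone SGL test binary; the "Invalid IO length parameter" entries are expected rejections and the "test passed" entries are the positive cases. A minimal sketch for rerunning just that binary outside the harness, assuming a built SPDK tree at a hypothetical $SPDK_DIR (the binary and script paths themselves are taken from this log):

    # Sketch: rerun the standalone SGL test against locally bound controllers.
    # $SPDK_DIR is an assumed checkout location; adjust to your tree.
    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    sudo "$SPDK_DIR/scripts/setup.sh"     # rebind NVMe devices to a userspace driver
    sudo "$SPDK_DIR/test/nvme/sgl/sgl"    # prints per-controller build_io_request_* results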
00:09:46.167 00:09:46.167 real 0m0.199s 00:09:46.167 user 0m0.069s 00:09:46.167 sys 0m0.079s 00:09:46.167 ************************************ 00:09:46.167 END TEST nvme_e2edp 00:09:46.167 ************************************ 00:09:46.167 15:51:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:46.167 15:51:57 -- common/autotest_common.sh@10 -- # set +x 00:09:46.167 15:51:57 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:46.167 15:51:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:46.167 15:51:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:46.167 15:51:57 -- common/autotest_common.sh@10 -- # set +x 00:09:46.167 ************************************ 00:09:46.167 START TEST nvme_reserve 00:09:46.167 ************************************ 00:09:46.167 15:51:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:46.425 ===================================================== 00:09:46.425 NVMe Controller at PCI bus 0, device 9, function 0 00:09:46.425 ===================================================== 00:09:46.425 Reservations: Not Supported 00:09:46.425 ===================================================== 00:09:46.425 NVMe Controller at PCI bus 0, device 6, function 0 00:09:46.425 ===================================================== 00:09:46.425 Reservations: Not Supported 00:09:46.425 ===================================================== 00:09:46.425 NVMe Controller at PCI bus 0, device 7, function 0 00:09:46.425 ===================================================== 00:09:46.425 Reservations: Not Supported 00:09:46.425 ===================================================== 00:09:46.425 NVMe Controller at PCI bus 0, device 8, function 0 00:09:46.425 ===================================================== 00:09:46.425 Reservations: Not Supported 00:09:46.425 Reservation test passed 00:09:46.425 00:09:46.425 real 0m0.192s 00:09:46.425 user 0m0.056s 00:09:46.425 sys 0m0.092s 00:09:46.426 ************************************ 00:09:46.426 END TEST nvme_reserve 00:09:46.426 ************************************ 00:09:46.426 15:51:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:46.426 15:51:57 -- common/autotest_common.sh@10 -- # set +x 00:09:46.426 15:51:57 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:46.426 15:51:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:46.426 15:51:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:46.426 15:51:57 -- common/autotest_common.sh@10 -- # set +x 00:09:46.426 ************************************ 00:09:46.426 START TEST nvme_err_injection 00:09:46.426 ************************************ 00:09:46.426 15:51:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:46.684 NVMe Error Injection test 00:09:46.684 Attached to 0000:00:09.0 00:09:46.684 Attached to 0000:00:06.0 00:09:46.684 Attached to 0000:00:07.0 00:09:46.684 Attached to 0000:00:08.0 00:09:46.684 0000:00:09.0: get features failed as expected 00:09:46.684 0000:00:06.0: get features failed as expected 00:09:46.684 0000:00:07.0: get features failed as expected 00:09:46.684 0000:00:08.0: get features failed as expected 00:09:46.684 0000:00:09.0: get features successfully as expected 00:09:46.684 0000:00:06.0: get features successfully as expected 00:09:46.684 0000:00:07.0: get features 
successfully as expected 00:09:46.684 0000:00:08.0: get features successfully as expected 00:09:46.684 0000:00:09.0: read failed as expected 00:09:46.684 0000:00:06.0: read failed as expected 00:09:46.684 0000:00:07.0: read failed as expected 00:09:46.684 0000:00:08.0: read failed as expected 00:09:46.684 0000:00:09.0: read successfully as expected 00:09:46.684 0000:00:06.0: read successfully as expected 00:09:46.684 0000:00:07.0: read successfully as expected 00:09:46.684 0000:00:08.0: read successfully as expected 00:09:46.684 Cleaning up... 00:09:46.684 00:09:46.684 real 0m0.256s 00:09:46.684 user 0m0.106s 00:09:46.684 sys 0m0.101s 00:09:46.684 ************************************ 00:09:46.684 END TEST nvme_err_injection 00:09:46.684 ************************************ 00:09:46.684 15:51:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:46.684 15:51:57 -- common/autotest_common.sh@10 -- # set +x 00:09:46.684 15:51:57 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:46.684 15:51:57 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:09:46.684 15:51:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:46.684 15:51:57 -- common/autotest_common.sh@10 -- # set +x 00:09:46.684 ************************************ 00:09:46.684 START TEST nvme_overhead 00:09:46.684 ************************************ 00:09:46.684 15:51:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:48.058 Initializing NVMe Controllers 00:09:48.058 Attached to 0000:00:09.0 00:09:48.058 Attached to 0000:00:06.0 00:09:48.058 Attached to 0000:00:07.0 00:09:48.058 Attached to 0000:00:08.0 00:09:48.058 Initialization complete. Launching workers. 
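Each test in this log is launched through run_test, which brackets the command with the START TEST / END TEST banners and asterisk rules and records its timing. The real wrapper lives in autotest_common.sh; the following is only a hypothetical minimal analogue to show where those banners and the real/user/sys summaries come from, not the actual implementation:

    # Hypothetical minimal analogue of run_test (assumption: the real wrapper
    # in autotest_common.sh also handles xtrace toggling and argument checks).
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"              # the real/user/sys lines in the log come from time
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }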
00:09:48.058 submit (in ns) avg, min, max = 11289.0, 9998.5, 89746.9 00:09:48.058 complete (in ns) avg, min, max = 7591.3, 7179.2, 1114397.7 00:09:48.058 00:09:48.058 Submit histogram 00:09:48.058 ================ 00:09:48.058 Range in us Cumulative Count 00:09:48.058 9.994 - 10.043: 0.0055% ( 1) 00:09:48.058 10.092 - 10.142: 0.0110% ( 1) 00:09:48.058 10.142 - 10.191: 0.0165% ( 1) 00:09:48.058 10.240 - 10.289: 0.0274% ( 2) 00:09:48.058 10.732 - 10.782: 0.0329% ( 1) 00:09:48.058 10.782 - 10.831: 0.0987% ( 12) 00:09:48.058 10.831 - 10.880: 0.8339% ( 134) 00:09:48.058 10.880 - 10.929: 4.3230% ( 636) 00:09:48.058 10.929 - 10.978: 14.0498% ( 1773) 00:09:48.058 10.978 - 11.028: 29.4602% ( 2809) 00:09:48.058 11.028 - 11.077: 45.8525% ( 2988) 00:09:48.058 11.077 - 11.126: 61.1861% ( 2795) 00:09:48.058 11.126 - 11.175: 74.1167% ( 2357) 00:09:48.058 11.175 - 11.225: 82.6860% ( 1562) 00:09:48.058 11.225 - 11.274: 86.8938% ( 767) 00:09:48.058 11.274 - 11.323: 88.6768% ( 325) 00:09:48.058 11.323 - 11.372: 89.2473% ( 104) 00:09:48.058 11.372 - 11.422: 89.5381% ( 53) 00:09:48.058 11.422 - 11.471: 89.7465% ( 38) 00:09:48.058 11.471 - 11.520: 89.9495% ( 37) 00:09:48.058 11.520 - 11.569: 90.2183% ( 49) 00:09:48.058 11.569 - 11.618: 90.4323% ( 39) 00:09:48.058 11.618 - 11.668: 90.6737% ( 44) 00:09:48.058 11.668 - 11.717: 90.8657% ( 35) 00:09:48.058 11.717 - 11.766: 91.0193% ( 28) 00:09:48.058 11.766 - 11.815: 91.2058% ( 34) 00:09:48.058 11.815 - 11.865: 91.4856% ( 51) 00:09:48.058 11.865 - 11.914: 91.9081% ( 77) 00:09:48.059 11.914 - 11.963: 92.2811% ( 68) 00:09:48.059 11.963 - 12.012: 92.7694% ( 89) 00:09:48.059 12.012 - 12.062: 93.5868% ( 149) 00:09:48.059 12.062 - 12.111: 94.4591% ( 159) 00:09:48.059 12.111 - 12.160: 95.3314% ( 159) 00:09:48.059 12.160 - 12.209: 95.9129% ( 106) 00:09:48.059 12.209 - 12.258: 96.4011% ( 89) 00:09:48.059 12.258 - 12.308: 96.7577% ( 65) 00:09:48.059 12.308 - 12.357: 97.0211% ( 48) 00:09:48.059 12.357 - 12.406: 97.1034% ( 15) 00:09:48.059 12.406 - 12.455: 97.2241% ( 22) 00:09:48.059 12.455 - 12.505: 97.3338% ( 20) 00:09:48.059 12.505 - 12.554: 97.3777% ( 8) 00:09:48.059 12.554 - 12.603: 97.4161% ( 7) 00:09:48.059 12.603 - 12.702: 97.4490% ( 6) 00:09:48.059 12.702 - 12.800: 97.4819% ( 6) 00:09:48.059 12.800 - 12.898: 97.5642% ( 15) 00:09:48.059 12.898 - 12.997: 97.6904% ( 23) 00:09:48.059 12.997 - 13.095: 97.7891% ( 18) 00:09:48.059 13.095 - 13.194: 97.9153% ( 23) 00:09:48.059 13.194 - 13.292: 98.0250% ( 20) 00:09:48.059 13.292 - 13.391: 98.1073% ( 15) 00:09:48.059 13.391 - 13.489: 98.1567% ( 9) 00:09:48.059 13.489 - 13.588: 98.1786% ( 4) 00:09:48.059 13.588 - 13.686: 98.2170% ( 7) 00:09:48.059 13.686 - 13.785: 98.2609% ( 8) 00:09:48.059 13.785 - 13.883: 98.2938% ( 6) 00:09:48.059 13.883 - 13.982: 98.3158% ( 4) 00:09:48.059 13.982 - 14.080: 98.3487% ( 6) 00:09:48.059 14.080 - 14.178: 98.3816% ( 6) 00:09:48.059 14.178 - 14.277: 98.3871% ( 1) 00:09:48.059 14.277 - 14.375: 98.4036% ( 3) 00:09:48.059 14.375 - 14.474: 98.4145% ( 2) 00:09:48.059 14.474 - 14.572: 98.4310% ( 3) 00:09:48.059 14.572 - 14.671: 98.4420% ( 2) 00:09:48.059 14.671 - 14.769: 98.4584% ( 3) 00:09:48.059 14.769 - 14.868: 98.4639% ( 1) 00:09:48.059 14.868 - 14.966: 98.5078% ( 8) 00:09:48.059 14.966 - 15.065: 98.5407% ( 6) 00:09:48.059 15.065 - 15.163: 98.5681% ( 5) 00:09:48.059 15.163 - 15.262: 98.5791% ( 2) 00:09:48.059 15.262 - 15.360: 98.5901% ( 2) 00:09:48.059 15.360 - 15.458: 98.5956% ( 1) 00:09:48.059 15.458 - 15.557: 98.6175% ( 4) 00:09:48.059 15.557 - 15.655: 98.6340% ( 3) 00:09:48.059 15.754 - 15.852: 
98.6395% ( 1) 00:09:48.059 15.852 - 15.951: 98.6449% ( 1) 00:09:48.059 15.951 - 16.049: 98.6559% ( 2) 00:09:48.059 16.049 - 16.148: 98.6669% ( 2) 00:09:48.059 16.148 - 16.246: 98.6833% ( 3) 00:09:48.059 16.246 - 16.345: 98.6943% ( 2) 00:09:48.059 16.345 - 16.443: 98.6998% ( 1) 00:09:48.059 16.542 - 16.640: 98.7821% ( 15) 00:09:48.059 16.640 - 16.738: 98.8644% ( 15) 00:09:48.059 16.738 - 16.837: 98.9961% ( 24) 00:09:48.059 16.837 - 16.935: 99.0674% ( 13) 00:09:48.059 16.935 - 17.034: 99.1551% ( 16) 00:09:48.059 17.034 - 17.132: 99.2374% ( 15) 00:09:48.059 17.132 - 17.231: 99.3142% ( 14) 00:09:48.059 17.231 - 17.329: 99.3856% ( 13) 00:09:48.059 17.329 - 17.428: 99.4514% ( 12) 00:09:48.059 17.428 - 17.526: 99.4953% ( 8) 00:09:48.059 17.526 - 17.625: 99.5501% ( 10) 00:09:48.059 17.625 - 17.723: 99.5995% ( 9) 00:09:48.059 17.723 - 17.822: 99.6434% ( 8) 00:09:48.059 17.822 - 17.920: 99.6708% ( 5) 00:09:48.059 17.920 - 18.018: 99.6983% ( 5) 00:09:48.059 18.018 - 18.117: 99.7202% ( 4) 00:09:48.059 18.117 - 18.215: 99.7422% ( 4) 00:09:48.059 18.215 - 18.314: 99.7476% ( 1) 00:09:48.059 18.314 - 18.412: 99.7531% ( 1) 00:09:48.059 18.412 - 18.511: 99.7641% ( 2) 00:09:48.059 18.609 - 18.708: 99.7806% ( 3) 00:09:48.059 18.708 - 18.806: 99.7860% ( 1) 00:09:48.059 18.806 - 18.905: 99.7915% ( 1) 00:09:48.059 18.905 - 19.003: 99.7970% ( 1) 00:09:48.059 19.298 - 19.397: 99.8025% ( 1) 00:09:48.059 19.397 - 19.495: 99.8080% ( 1) 00:09:48.059 19.495 - 19.594: 99.8135% ( 1) 00:09:48.059 19.692 - 19.791: 99.8244% ( 2) 00:09:48.059 19.988 - 20.086: 99.8299% ( 1) 00:09:48.059 20.283 - 20.382: 99.8354% ( 1) 00:09:48.059 20.382 - 20.480: 99.8409% ( 1) 00:09:48.059 20.480 - 20.578: 99.8464% ( 1) 00:09:48.059 20.677 - 20.775: 99.8519% ( 1) 00:09:48.059 20.972 - 21.071: 99.8574% ( 1) 00:09:48.059 21.169 - 21.268: 99.8628% ( 1) 00:09:48.059 21.366 - 21.465: 99.8683% ( 1) 00:09:48.059 21.465 - 21.563: 99.8738% ( 1) 00:09:48.059 21.563 - 21.662: 99.8793% ( 1) 00:09:48.059 21.662 - 21.760: 99.8848% ( 1) 00:09:48.059 21.858 - 21.957: 99.8903% ( 1) 00:09:48.059 23.138 - 23.237: 99.8958% ( 1) 00:09:48.059 23.434 - 23.532: 99.9013% ( 1) 00:09:48.059 23.532 - 23.631: 99.9067% ( 1) 00:09:48.059 23.631 - 23.729: 99.9177% ( 2) 00:09:48.059 23.926 - 24.025: 99.9232% ( 1) 00:09:48.059 24.025 - 24.123: 99.9287% ( 1) 00:09:48.059 26.585 - 26.782: 99.9342% ( 1) 00:09:48.059 26.782 - 26.978: 99.9397% ( 1) 00:09:48.059 31.508 - 31.705: 99.9451% ( 1) 00:09:48.059 31.902 - 32.098: 99.9506% ( 1) 00:09:48.059 33.280 - 33.477: 99.9616% ( 2) 00:09:48.059 36.825 - 37.022: 99.9671% ( 1) 00:09:48.059 42.338 - 42.535: 99.9726% ( 1) 00:09:48.059 43.323 - 43.520: 99.9781% ( 1) 00:09:48.059 43.520 - 43.717: 99.9835% ( 1) 00:09:48.059 48.837 - 49.034: 99.9890% ( 1) 00:09:48.059 50.412 - 50.806: 99.9945% ( 1) 00:09:48.059 89.403 - 89.797: 100.0000% ( 1) 00:09:48.059 00:09:48.059 Complete histogram 00:09:48.059 ================== 00:09:48.059 Range in us Cumulative Count 00:09:48.059 7.138 - 7.188: 0.0165% ( 3) 00:09:48.059 7.188 - 7.237: 0.9491% ( 170) 00:09:48.059 7.237 - 7.286: 7.1045% ( 1122) 00:09:48.059 7.286 - 7.335: 24.2374% ( 3123) 00:09:48.059 7.335 - 7.385: 43.7020% ( 3548) 00:09:48.059 7.385 - 7.434: 60.5387% ( 3069) 00:09:48.059 7.434 - 7.483: 78.6811% ( 3307) 00:09:48.059 7.483 - 7.532: 89.4174% ( 1957) 00:09:48.059 7.532 - 7.582: 94.2341% ( 878) 00:09:48.059 7.582 - 7.631: 95.9458% ( 312) 00:09:48.059 7.631 - 7.680: 96.6700% ( 132) 00:09:48.059 7.680 - 7.729: 97.1527% ( 88) 00:09:48.059 7.729 - 7.778: 97.3502% ( 36) 00:09:48.059 7.778 - 
7.828: 97.4325% ( 15) 00:09:48.059 7.828 - 7.877: 97.4764% ( 8) 00:09:48.059 7.877 - 7.926: 97.5093% ( 6) 00:09:48.059 7.926 - 7.975: 97.5697% ( 11) 00:09:48.059 7.975 - 8.025: 97.6849% ( 21) 00:09:48.059 8.025 - 8.074: 97.8440% ( 29) 00:09:48.059 8.074 - 8.123: 98.0305% ( 34) 00:09:48.059 8.123 - 8.172: 98.1786% ( 27) 00:09:48.059 8.172 - 8.222: 98.3103% ( 24) 00:09:48.059 8.222 - 8.271: 98.3926% ( 15) 00:09:48.059 8.271 - 8.320: 98.4584% ( 12) 00:09:48.059 8.320 - 8.369: 98.4858% ( 5) 00:09:48.059 8.369 - 8.418: 98.5023% ( 3) 00:09:48.059 8.418 - 8.468: 98.5297% ( 5) 00:09:48.059 8.468 - 8.517: 98.5352% ( 1) 00:09:48.059 8.566 - 8.615: 98.5407% ( 1) 00:09:48.059 8.615 - 8.665: 98.5517% ( 2) 00:09:48.059 8.665 - 8.714: 98.5572% ( 1) 00:09:48.059 8.763 - 8.812: 98.5627% ( 1) 00:09:48.059 8.812 - 8.862: 98.5681% ( 1) 00:09:48.059 8.911 - 8.960: 98.5736% ( 1) 00:09:48.059 9.058 - 9.108: 98.5846% ( 2) 00:09:48.059 9.403 - 9.452: 98.5901% ( 1) 00:09:48.059 9.502 - 9.551: 98.5956% ( 1) 00:09:48.059 9.846 - 9.895: 98.6011% ( 1) 00:09:48.059 9.895 - 9.945: 98.6120% ( 2) 00:09:48.059 9.945 - 9.994: 98.6175% ( 1) 00:09:48.059 10.092 - 10.142: 98.6230% ( 1) 00:09:48.059 10.142 - 10.191: 98.6285% ( 1) 00:09:48.059 10.191 - 10.240: 98.6340% ( 1) 00:09:48.059 10.437 - 10.486: 98.6395% ( 1) 00:09:48.059 10.683 - 10.732: 98.6449% ( 1) 00:09:48.059 10.929 - 10.978: 98.6504% ( 1) 00:09:48.059 11.323 - 11.372: 98.6559% ( 1) 00:09:48.059 11.668 - 11.717: 98.6614% ( 1) 00:09:48.059 11.766 - 11.815: 98.6669% ( 1) 00:09:48.059 12.012 - 12.062: 98.6724% ( 1) 00:09:48.059 12.111 - 12.160: 98.6779% ( 1) 00:09:48.059 12.160 - 12.209: 98.6888% ( 2) 00:09:48.059 12.603 - 12.702: 98.6943% ( 1) 00:09:48.059 12.702 - 12.800: 98.6998% ( 1) 00:09:48.059 12.800 - 12.898: 98.7163% ( 3) 00:09:48.059 12.898 - 12.997: 98.7437% ( 5) 00:09:48.059 12.997 - 13.095: 98.8534% ( 20) 00:09:48.059 13.095 - 13.194: 98.9192% ( 12) 00:09:48.059 13.194 - 13.292: 98.9686% ( 9) 00:09:48.059 13.292 - 13.391: 99.0619% ( 17) 00:09:48.059 13.391 - 13.489: 99.1497% ( 16) 00:09:48.059 13.489 - 13.588: 99.2649% ( 21) 00:09:48.059 13.588 - 13.686: 99.3526% ( 16) 00:09:48.059 13.686 - 13.785: 99.3746% ( 4) 00:09:48.059 13.785 - 13.883: 99.4733% ( 18) 00:09:48.059 13.883 - 13.982: 99.5611% ( 16) 00:09:48.059 13.982 - 14.080: 99.5995% ( 7) 00:09:48.059 14.080 - 14.178: 99.6379% ( 7) 00:09:48.059 14.178 - 14.277: 99.6708% ( 6) 00:09:48.059 14.277 - 14.375: 99.6873% ( 3) 00:09:48.059 14.375 - 14.474: 99.7202% ( 6) 00:09:48.059 14.474 - 14.572: 99.7476% ( 5) 00:09:48.059 14.572 - 14.671: 99.7586% ( 2) 00:09:48.059 14.671 - 14.769: 99.7806% ( 4) 00:09:48.059 14.769 - 14.868: 99.7970% ( 3) 00:09:48.059 14.868 - 14.966: 99.8025% ( 1) 00:09:48.059 14.966 - 15.065: 99.8080% ( 1) 00:09:48.060 15.262 - 15.360: 99.8135% ( 1) 00:09:48.060 15.458 - 15.557: 99.8190% ( 1) 00:09:48.060 15.754 - 15.852: 99.8244% ( 1) 00:09:48.060 16.049 - 16.148: 99.8409% ( 3) 00:09:48.060 16.148 - 16.246: 99.8464% ( 1) 00:09:48.060 16.246 - 16.345: 99.8574% ( 2) 00:09:48.060 16.345 - 16.443: 99.8738% ( 3) 00:09:48.060 16.542 - 16.640: 99.8848% ( 2) 00:09:48.060 16.640 - 16.738: 99.8958% ( 2) 00:09:48.060 16.738 - 16.837: 99.9013% ( 1) 00:09:48.060 17.132 - 17.231: 99.9067% ( 1) 00:09:48.060 17.231 - 17.329: 99.9122% ( 1) 00:09:48.060 17.329 - 17.428: 99.9232% ( 2) 00:09:48.060 17.822 - 17.920: 99.9287% ( 1) 00:09:48.060 18.412 - 18.511: 99.9342% ( 1) 00:09:48.060 18.609 - 18.708: 99.9451% ( 2) 00:09:48.060 18.905 - 19.003: 99.9506% ( 1) 00:09:48.060 19.298 - 19.397: 99.9561% ( 1) 
00:09:48.060 19.397 - 19.495: 99.9616% ( 1) 00:09:48.060 19.791 - 19.889: 99.9671% ( 1) 00:09:48.060 20.382 - 20.480: 99.9726% ( 1) 00:09:48.060 20.480 - 20.578: 99.9781% ( 1) 00:09:48.060 23.138 - 23.237: 99.9835% ( 1) 00:09:48.060 220.554 - 222.129: 99.9890% ( 1) 00:09:48.060 253.637 - 255.212: 99.9945% ( 1) 00:09:48.060 1109.071 - 1115.372: 100.0000% ( 1) 00:09:48.060 00:09:48.060 00:09:48.060 real 0m1.202s 00:09:48.060 user 0m1.064s 00:09:48.060 sys 0m0.098s 00:09:48.060 ************************************ 00:09:48.060 END TEST nvme_overhead 00:09:48.060 ************************************ 00:09:48.060 15:51:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:48.060 15:51:59 -- common/autotest_common.sh@10 -- # set +x 00:09:48.060 15:51:59 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:48.060 15:51:59 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:09:48.060 15:51:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:48.060 15:51:59 -- common/autotest_common.sh@10 -- # set +x 00:09:48.060 ************************************ 00:09:48.060 START TEST nvme_arbitration 00:09:48.060 ************************************ 00:09:48.060 15:51:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:51.358 Initializing NVMe Controllers 00:09:51.358 Attached to 0000:00:09.0 00:09:51.358 Attached to 0000:00:06.0 00:09:51.358 Attached to 0000:00:07.0 00:09:51.358 Attached to 0000:00:08.0 00:09:51.358 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:09:51.358 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:09:51.358 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:09:51.358 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:51.358 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:51.358 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:51.358 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:51.358 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:51.358 Initialization complete. Launching workers. 
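The submit and complete histograms above print one bucket per line as "low - high: cumulative% ( count )", so summary points can be pulled back out of a captured run with plain text tools. A sketch, assuming the overhead output was saved directly to a hypothetical overhead.log without the CI timestamp prefixes:

    # Sketch: report the first submit-latency bucket at or above the 50th
    # cumulative percentile. Bucket lines look like
    # '11.077 - 11.126: 61.1861% ( 2795)', as printed above; the submit
    # histogram comes first in the file, so the first match is from it.
    awk '$2 == "-" && $4 ~ /%$/ {
        pct = $4; sub(/%/, "", pct)
        if (pct + 0 >= 50) {
            sub(/:$/, "", $3)
            print "p50 bucket ends at " $3 " us"
            exit
        }
    }' overhead.log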
00:09:51.358 Starting thread on core 2 with urgent priority queue 00:09:51.358 Starting thread on core 1 with urgent priority queue 00:09:51.358 Starting thread on core 3 with urgent priority queue 00:09:51.358 Starting thread on core 0 with urgent priority queue 00:09:51.358 QEMU NVMe Ctrl (12343 ) core 0: 917.33 IO/s 109.01 secs/100000 ios 00:09:51.358 QEMU NVMe Ctrl (12342 ) core 0: 917.33 IO/s 109.01 secs/100000 ios 00:09:51.358 QEMU NVMe Ctrl (12340 ) core 1: 917.33 IO/s 109.01 secs/100000 ios 00:09:51.358 QEMU NVMe Ctrl (12342 ) core 1: 917.33 IO/s 109.01 secs/100000 ios 00:09:51.358 QEMU NVMe Ctrl (12341 ) core 2: 917.33 IO/s 109.01 secs/100000 ios 00:09:51.358 QEMU NVMe Ctrl (12342 ) core 3: 917.33 IO/s 109.01 secs/100000 ios 00:09:51.358 ======================================================== 00:09:51.358 00:09:51.358 00:09:51.358 real 0m3.407s 00:09:51.358 user 0m9.557s 00:09:51.358 sys 0m0.111s 00:09:51.358 15:52:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:51.358 ************************************ 00:09:51.358 END TEST nvme_arbitration 00:09:51.358 ************************************ 00:09:51.358 15:52:02 -- common/autotest_common.sh@10 -- # set +x 00:09:51.358 15:52:02 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:09:51.358 15:52:02 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:51.358 15:52:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:51.358 15:52:02 -- common/autotest_common.sh@10 -- # set +x 00:09:51.358 ************************************ 00:09:51.358 START TEST nvme_single_aen 00:09:51.358 ************************************ 00:09:51.358 15:52:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:09:51.358 [2024-11-29 15:52:02.728803] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:51.358 [2024-11-29 15:52:02.728980] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.619 [2024-11-29 15:52:02.869168] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:51.619 [2024-11-29 15:52:02.871915] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:51.619 [2024-11-29 15:52:02.873943] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:51.619 [2024-11-29 15:52:02.875144] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:51.619 Asynchronous Event Request test 00:09:51.619 Attached to 0000:00:09.0 00:09:51.619 Attached to 0000:00:06.0 00:09:51.619 Attached to 0000:00:07.0 00:09:51.619 Attached to 0000:00:08.0 00:09:51.619 Reset controller to setup AER completions for this process 00:09:51.619 Registering asynchronous event callbacks... 
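The arbitration run above echoes its full configuration (-q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0) before launching one worker thread per core. A sketch of the direct invocation, with flag readings that are best-effort inferences from this log rather than authoritative documentation:

    # Sketch: run the arbitration example directly (path and flags from the log).
    # Inferred meanings (assumptions): -t 3 = run time in seconds (consistent
    # with the 'real 0m3.407s' above); -i 0 = shared-memory group id; the
    # remaining values are defaults the tool expands and echoes itself.
    sudo /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0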
00:09:51.619 Getting orig temperature thresholds of all controllers 00:09:51.619 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:51.619 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:51.619 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:51.619 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:51.619 Setting all controllers temperature threshold low to trigger AER 00:09:51.619 Waiting for all controllers temperature threshold to be set lower 00:09:51.619 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:51.619 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:09:51.619 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:51.619 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:09:51.619 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:51.619 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:09:51.620 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:51.620 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:09:51.620 Waiting for all controllers to trigger AER and reset threshold 00:09:51.620 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:51.620 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:51.620 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:51.620 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:51.620 Cleaning up... 00:09:51.620 00:09:51.620 real 0m0.214s 00:09:51.620 user 0m0.065s 00:09:51.620 sys 0m0.103s 00:09:51.620 15:52:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:51.620 ************************************ 00:09:51.620 END TEST nvme_single_aen 00:09:51.620 15:52:02 -- common/autotest_common.sh@10 -- # set +x 00:09:51.620 ************************************ 00:09:51.620 15:52:02 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:51.620 15:52:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:51.620 15:52:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:51.620 15:52:02 -- common/autotest_common.sh@10 -- # set +x 00:09:51.620 ************************************ 00:09:51.620 START TEST nvme_doorbell_aers 00:09:51.620 ************************************ 00:09:51.620 15:52:02 -- common/autotest_common.sh@1114 -- # nvme_doorbell_aers 00:09:51.620 15:52:02 -- nvme/nvme.sh@70 -- # bdfs=() 00:09:51.620 15:52:02 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:51.620 15:52:02 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:51.620 15:52:02 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:51.620 15:52:02 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:51.620 15:52:02 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:51.620 15:52:02 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:51.620 15:52:02 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:51.620 15:52:02 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:51.620 15:52:03 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:51.620 15:52:03 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:51.620 15:52:03 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:51.620 15:52:03 -- nvme/nvme.sh@73 
-- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:09:51.881 [2024-11-29 15:52:03.244317] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63748) is not found. Dropping the request. 00:10:01.885 Executing: test_write_invalid_db 00:10:01.885 Waiting for AER completion... 00:10:01.885 Failure: test_write_invalid_db 00:10:01.885 00:10:01.885 Executing: test_invalid_db_write_overflow_sq 00:10:01.885 Waiting for AER completion... 00:10:01.885 Failure: test_invalid_db_write_overflow_sq 00:10:01.885 00:10:01.885 Executing: test_invalid_db_write_overflow_cq 00:10:01.885 Waiting for AER completion... 00:10:01.885 Failure: test_invalid_db_write_overflow_cq 00:10:01.885 00:10:01.885 15:52:13 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:01.885 15:52:13 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:01.885 [2024-11-29 15:52:13.270778] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63748) is not found. Dropping the request. 00:10:11.855 Executing: test_write_invalid_db 00:10:11.855 Waiting for AER completion... 00:10:11.855 Failure: test_write_invalid_db 00:10:11.855 00:10:11.855 Executing: test_invalid_db_write_overflow_sq 00:10:11.855 Waiting for AER completion... 00:10:11.855 Failure: test_invalid_db_write_overflow_sq 00:10:11.855 00:10:11.855 Executing: test_invalid_db_write_overflow_cq 00:10:11.855 Waiting for AER completion... 00:10:11.855 Failure: test_invalid_db_write_overflow_cq 00:10:11.855 00:10:11.855 15:52:23 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:11.855 15:52:23 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:12.113 [2024-11-29 15:52:23.290400] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63748) is not found. Dropping the request. 00:10:22.083 Executing: test_write_invalid_db 00:10:22.083 Waiting for AER completion... 00:10:22.083 Failure: test_write_invalid_db 00:10:22.083 00:10:22.083 Executing: test_invalid_db_write_overflow_sq 00:10:22.083 Waiting for AER completion... 00:10:22.083 Failure: test_invalid_db_write_overflow_sq 00:10:22.083 00:10:22.083 Executing: test_invalid_db_write_overflow_cq 00:10:22.083 Waiting for AER completion... 00:10:22.083 Failure: test_invalid_db_write_overflow_cq 00:10:22.083 00:10:22.083 15:52:33 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:22.083 15:52:33 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:09.0' 00:10:22.083 [2024-11-29 15:52:33.348337] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63748) is not found. Dropping the request. 00:10:32.179 Executing: test_write_invalid_db 00:10:32.179 Waiting for AER completion... 00:10:32.179 Failure: test_write_invalid_db 00:10:32.179 00:10:32.179 Executing: test_invalid_db_write_overflow_sq 00:10:32.179 Waiting for AER completion... 00:10:32.179 Failure: test_invalid_db_write_overflow_sq 00:10:32.179 00:10:32.179 Executing: test_invalid_db_write_overflow_cq 00:10:32.179 Waiting for AER completion... 
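nvme_doorbell_aers, traced above, builds its device list from gen_nvme.sh and runs the doorbell tool once per PCI address under a 10-second timeout. A condensed sketch of that loop; $rootdir as the SPDK checkout is the only assumption, everything else mirrors the traced commands:

    # Sketch of the traced doorbell loop. gen_nvme.sh emits a JSON config whose
    # traddr fields are the NVMe PCI addresses (exactly as the xtrace shows).
    rootdir=${rootdir:-/home/vagrant/spdk_repo/spdk}
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        # --preserve-status makes timeout(1) return the test's own exit code
        # instead of 124, so a clean pass within the 10 s window stays a pass.
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" \
            -r "trtype:PCIe traddr:$bdf"
    done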
00:10:32.179 Failure: test_invalid_db_write_overflow_cq 00:10:32.179 00:10:32.179 00:10:32.179 real 0m40.186s 00:10:32.179 user 0m34.204s 00:10:32.179 sys 0m5.579s 00:10:32.179 15:52:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:32.179 15:52:43 -- common/autotest_common.sh@10 -- # set +x 00:10:32.179 ************************************ 00:10:32.179 END TEST nvme_doorbell_aers 00:10:32.179 ************************************ 00:10:32.179 15:52:43 -- nvme/nvme.sh@97 -- # uname 00:10:32.179 15:52:43 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:32.179 15:52:43 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:32.179 15:52:43 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:10:32.179 15:52:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:32.179 15:52:43 -- common/autotest_common.sh@10 -- # set +x 00:10:32.179 ************************************ 00:10:32.179 START TEST nvme_multi_aen 00:10:32.179 ************************************ 00:10:32.179 15:52:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:32.179 [2024-11-29 15:52:43.224642] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:32.179 [2024-11-29 15:52:43.224789] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.179 [2024-11-29 15:52:43.352452] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:10:32.179 [2024-11-29 15:52:43.352610] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63748) is not found. Dropping the request. 00:10:32.179 [2024-11-29 15:52:43.352648] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63748) is not found. Dropping the request. 00:10:32.179 [2024-11-29 15:52:43.352660] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63748) is not found. Dropping the request. 00:10:32.179 [2024-11-29 15:52:43.353729] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:32.179 [2024-11-29 15:52:43.353748] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63748) is not found. Dropping the request. 00:10:32.179 [2024-11-29 15:52:43.353766] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63748) is not found. Dropping the request. 00:10:32.179 [2024-11-29 15:52:43.353776] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63748) is not found. Dropping the request. 00:10:32.179 [2024-11-29 15:52:43.354676] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:10:32.179 [2024-11-29 15:52:43.354691] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63748) is not found. Dropping the request. 00:10:32.179 [2024-11-29 15:52:43.354706] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63748) is not found. Dropping the request. 
00:10:32.179 [2024-11-29 15:52:43.354715] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63748) is not found. Dropping the request. 00:10:32.179 [2024-11-29 15:52:43.355621] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:32.179 [2024-11-29 15:52:43.355701] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63748) is not found. Dropping the request. 00:10:32.179 [2024-11-29 15:52:43.355759] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63748) is not found. Dropping the request. 00:10:32.179 [2024-11-29 15:52:43.355789] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63748) is not found. Dropping the request. 00:10:32.179 [2024-11-29 15:52:43.366139] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:32.179 [2024-11-29 15:52:43.366610] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 Child process pid: 64269 00:10:32.179 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.179 [Child] Asynchronous Event Request test 00:10:32.179 [Child] Attached to 0000:00:09.0 00:10:32.179 [Child] Attached to 0000:00:06.0 00:10:32.179 [Child] Attached to 0000:00:07.0 00:10:32.179 [Child] Attached to 0000:00:08.0 00:10:32.179 [Child] Registering asynchronous event callbacks... 00:10:32.179 [Child] Getting orig temperature thresholds of all controllers 00:10:32.179 [Child] 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:32.179 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:32.179 [Child] 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:32.179 [Child] 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:32.179 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:32.179 [Child] 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:32.179 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:32.179 [Child] 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:32.179 [Child] 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:32.179 [Child] 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:32.179 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:32.179 [Child] 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:32.179 [Child] 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:32.179 [Child] Cleaning up... 00:10:32.179 Asynchronous Event Request test 00:10:32.179 Attached to 0000:00:09.0 00:10:32.179 Attached to 0000:00:06.0 00:10:32.179 Attached to 0000:00:07.0 00:10:32.179 Attached to 0000:00:08.0 00:10:32.179 Reset controller to setup AER completions for this process 00:10:32.179 Registering asynchronous event callbacks... 
00:10:32.180 Getting orig temperature thresholds of all controllers 00:10:32.180 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:32.180 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:32.180 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:32.180 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:32.180 Setting all controllers temperature threshold low to trigger AER 00:10:32.180 Waiting for all controllers temperature threshold to be set lower 00:10:32.180 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:32.180 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:10:32.180 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:32.180 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:10:32.180 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:32.180 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:10:32.180 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:32.180 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:10:32.180 Waiting for all controllers to trigger AER and reset threshold 00:10:32.180 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:32.180 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:32.180 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:32.180 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:32.180 Cleaning up... 00:10:32.180 00:10:32.180 real 0m0.384s 00:10:32.180 user 0m0.110s 00:10:32.180 sys 0m0.168s 00:10:32.180 15:52:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:32.180 15:52:43 -- common/autotest_common.sh@10 -- # set +x 00:10:32.180 ************************************ 00:10:32.180 END TEST nvme_multi_aen 00:10:32.180 ************************************ 00:10:32.440 15:52:43 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:32.440 15:52:43 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:32.440 15:52:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:32.440 15:52:43 -- common/autotest_common.sh@10 -- # set +x 00:10:32.440 ************************************ 00:10:32.440 START TEST nvme_startup 00:10:32.440 ************************************ 00:10:32.440 15:52:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:32.440 Initializing NVMe Controllers 00:10:32.440 Attached to 0000:00:09.0 00:10:32.440 Attached to 0000:00:06.0 00:10:32.440 Attached to 0000:00:07.0 00:10:32.440 Attached to 0000:00:08.0 00:10:32.440 Initialization complete. 00:10:32.440 Time used:144481.984 (us). 
00:10:32.440 ************************************ 00:10:32.440 END TEST nvme_startup 00:10:32.440 ************************************ 00:10:32.440 00:10:32.440 real 0m0.203s 00:10:32.440 user 0m0.057s 00:10:32.440 sys 0m0.097s 00:10:32.440 15:52:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:32.440 15:52:43 -- common/autotest_common.sh@10 -- # set +x 00:10:32.440 15:52:43 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:32.440 15:52:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:32.440 15:52:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:32.440 15:52:43 -- common/autotest_common.sh@10 -- # set +x 00:10:32.440 ************************************ 00:10:32.440 START TEST nvme_multi_secondary 00:10:32.440 ************************************ 00:10:32.440 15:52:43 -- common/autotest_common.sh@1114 -- # nvme_multi_secondary 00:10:32.440 15:52:43 -- nvme/nvme.sh@52 -- # pid0=64320 00:10:32.440 15:52:43 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:32.440 15:52:43 -- nvme/nvme.sh@54 -- # pid1=64321 00:10:32.440 15:52:43 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:32.440 15:52:43 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:36.626 Initializing NVMe Controllers 00:10:36.626 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:36.626 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:36.626 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:36.626 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:36.626 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:36.626 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:36.626 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:36.626 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:36.626 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:36.626 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:36.626 Initialization complete. Launching workers. 
00:10:36.626 ======================================================== 00:10:36.626 Latency(us) 00:10:36.626 Device Information : IOPS MiB/s Average min max 00:10:36.626 PCIE (0000:00:09.0) NSID 1 from core 2: 3265.76 12.76 4898.95 830.82 12168.21 00:10:36.626 PCIE (0000:00:06.0) NSID 1 from core 2: 3265.76 12.76 4897.96 842.56 12345.97 00:10:36.626 PCIE (0000:00:07.0) NSID 1 from core 2: 3265.76 12.76 4899.12 765.35 12000.91 00:10:36.626 PCIE (0000:00:08.0) NSID 1 from core 2: 3265.76 12.76 4898.83 848.66 14682.12 00:10:36.626 PCIE (0000:00:08.0) NSID 2 from core 2: 3265.76 12.76 4899.41 843.50 14657.36 00:10:36.626 PCIE (0000:00:08.0) NSID 3 from core 2: 3265.76 12.76 4899.00 852.72 14219.81 00:10:36.626 ======================================================== 00:10:36.626 Total : 19594.56 76.54 4898.88 765.35 14682.12 00:10:36.626 00:10:36.626 15:52:47 -- nvme/nvme.sh@56 -- # wait 64320 00:10:36.626 Initializing NVMe Controllers 00:10:36.626 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:36.626 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:36.626 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:36.626 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:36.626 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:36.626 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:36.626 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:36.626 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:36.626 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:36.626 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:36.626 Initialization complete. Launching workers. 00:10:36.626 ======================================================== 00:10:36.626 Latency(us) 00:10:36.626 Device Information : IOPS MiB/s Average min max 00:10:36.626 PCIE (0000:00:09.0) NSID 1 from core 1: 7766.91 30.34 2059.60 939.34 5746.37 00:10:36.626 PCIE (0000:00:06.0) NSID 1 from core 1: 7766.91 30.34 2058.75 953.01 5456.80 00:10:36.626 PCIE (0000:00:07.0) NSID 1 from core 1: 7766.91 30.34 2059.62 992.39 5132.51 00:10:36.626 PCIE (0000:00:08.0) NSID 1 from core 1: 7766.91 30.34 2059.55 1038.53 5263.14 00:10:36.626 PCIE (0000:00:08.0) NSID 2 from core 1: 7766.91 30.34 2059.54 1080.53 5233.69 00:10:36.626 PCIE (0000:00:08.0) NSID 3 from core 1: 7766.91 30.34 2059.49 974.00 5418.32 00:10:36.626 ======================================================== 00:10:36.626 Total : 46601.44 182.04 2059.43 939.34 5746.37 00:10:36.626 00:10:37.997 Initializing NVMe Controllers 00:10:37.997 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:37.997 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:37.997 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:37.997 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:37.997 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:37.997 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:37.997 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:37.997 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:37.997 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:37.997 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:37.997 Initialization complete. Launching workers. 
00:10:37.997 ======================================================== 00:10:37.997 Latency(us) 00:10:37.997 Device Information : IOPS MiB/s Average min max 00:10:37.997 PCIE (0000:00:09.0) NSID 1 from core 0: 11249.76 43.94 1421.91 705.99 6558.55 00:10:37.997 PCIE (0000:00:06.0) NSID 1 from core 0: 11249.76 43.94 1421.13 684.76 6447.93 00:10:37.997 PCIE (0000:00:07.0) NSID 1 from core 0: 11249.76 43.94 1421.97 680.03 6172.22 00:10:37.997 PCIE (0000:00:08.0) NSID 1 from core 0: 11249.76 43.94 1421.99 704.42 5744.40 00:10:37.998 PCIE (0000:00:08.0) NSID 2 from core 0: 11249.76 43.94 1422.01 707.60 5489.39 00:10:37.998 PCIE (0000:00:08.0) NSID 3 from core 0: 11249.76 43.94 1422.05 699.92 5855.18 00:10:37.998 ======================================================== 00:10:37.998 Total : 67498.53 263.67 1421.84 680.03 6558.55 00:10:37.998 00:10:37.998 15:52:49 -- nvme/nvme.sh@57 -- # wait 64321 00:10:37.998 15:52:49 -- nvme/nvme.sh@61 -- # pid0=64394 00:10:37.998 15:52:49 -- nvme/nvme.sh@63 -- # pid1=64395 00:10:37.998 15:52:49 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:37.998 15:52:49 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:37.998 15:52:49 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:41.278 Initializing NVMe Controllers 00:10:41.278 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:41.278 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:41.278 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:41.278 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:41.278 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:41.278 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:41.278 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:41.278 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:41.278 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:41.278 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:41.278 Initialization complete. Launching workers. 
00:10:41.278 ======================================================== 00:10:41.278 Latency(us) 00:10:41.278 Device Information : IOPS MiB/s Average min max 00:10:41.278 PCIE (0000:00:09.0) NSID 1 from core 1: 8079.76 31.56 1979.84 730.61 5569.61 00:10:41.278 PCIE (0000:00:06.0) NSID 1 from core 1: 8079.76 31.56 1978.95 707.56 5625.53 00:10:41.278 PCIE (0000:00:07.0) NSID 1 from core 1: 8079.76 31.56 1979.86 727.80 5627.24 00:10:41.278 PCIE (0000:00:08.0) NSID 1 from core 1: 8079.76 31.56 1979.83 730.51 5908.74 00:10:41.278 PCIE (0000:00:08.0) NSID 2 from core 1: 8079.76 31.56 1979.81 724.85 6257.38 00:10:41.278 PCIE (0000:00:08.0) NSID 3 from core 1: 8079.76 31.56 1979.94 732.52 5802.12 00:10:41.278 ======================================================== 00:10:41.278 Total : 48478.55 189.37 1979.70 707.56 6257.38 00:10:41.278 00:10:41.278 Initializing NVMe Controllers 00:10:41.278 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:41.278 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:41.278 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:41.278 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:41.278 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:41.278 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:41.278 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:41.278 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:41.278 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:41.278 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:41.278 Initialization complete. Launching workers. 00:10:41.278 ======================================================== 00:10:41.278 Latency(us) 00:10:41.278 Device Information : IOPS MiB/s Average min max 00:10:41.278 PCIE (0000:00:09.0) NSID 1 from core 0: 8040.94 31.41 1989.44 714.21 5970.31 00:10:41.278 PCIE (0000:00:06.0) NSID 1 from core 0: 8040.94 31.41 1988.45 698.55 6109.28 00:10:41.278 PCIE (0000:00:07.0) NSID 1 from core 0: 8040.94 31.41 1989.35 711.73 6245.55 00:10:41.278 PCIE (0000:00:08.0) NSID 1 from core 0: 8040.94 31.41 1989.51 723.75 5612.72 00:10:41.278 PCIE (0000:00:08.0) NSID 2 from core 0: 8040.94 31.41 1989.56 722.95 5824.43 00:10:41.278 PCIE (0000:00:08.0) NSID 3 from core 0: 8040.94 31.41 1989.57 724.24 5539.26 00:10:41.278 ======================================================== 00:10:41.278 Total : 48245.61 188.46 1989.31 698.55 6245.55 00:10:41.278 00:10:43.812 Initializing NVMe Controllers 00:10:43.812 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:43.812 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:43.812 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:43.812 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:43.812 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:43.812 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:43.812 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:43.812 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:43.812 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:43.812 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:43.812 Initialization complete. Launching workers. 
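[annotation] The perf tables are internally consistent, which makes for a quick sanity check on a run: throughput per row is IOPS times the 4096-byte I/O size, and with -q 16 the average latency obeys Little's law (IOPS is approximately queue depth divided by average latency). Checking the core 0 rows above with nothing but awk (the remaining core 2 table follows):

    awk 'BEGIN { printf "%.2f MiB/s\n", 8040.94 * 4096 / (1024 * 1024) }'   # -> 31.41, matches the table
    awk 'BEGIN { printf "%.0f IOPS\n", 16 / (1989.44 / 1e6) }'              # -> 8042, close to the reported 8040.94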
00:10:43.812 ======================================================== 00:10:43.812 Latency(us) 00:10:43.812 Device Information : IOPS MiB/s Average min max 00:10:43.812 PCIE (0000:00:09.0) NSID 1 from core 2: 4892.58 19.11 3269.41 738.53 12179.90 00:10:43.812 PCIE (0000:00:06.0) NSID 1 from core 2: 4892.58 19.11 3268.11 731.71 12495.10 00:10:43.812 PCIE (0000:00:07.0) NSID 1 from core 2: 4892.58 19.11 3269.81 703.95 13023.91 00:10:43.812 PCIE (0000:00:08.0) NSID 1 from core 2: 4892.58 19.11 3269.60 732.75 12514.56 00:10:43.812 PCIE (0000:00:08.0) NSID 2 from core 2: 4892.58 19.11 3269.55 688.11 12487.89 00:10:43.812 PCIE (0000:00:08.0) NSID 3 from core 2: 4892.58 19.11 3269.34 633.28 12092.03 00:10:43.812 ======================================================== 00:10:43.812 Total : 29355.50 114.67 3269.30 633.28 13023.91 00:10:43.812 00:10:43.812 ************************************ 00:10:43.812 END TEST nvme_multi_secondary 00:10:43.812 ************************************ 00:10:43.812 15:52:54 -- nvme/nvme.sh@65 -- # wait 64394 00:10:43.812 15:52:54 -- nvme/nvme.sh@66 -- # wait 64395 00:10:43.812 00:10:43.812 real 0m10.797s 00:10:43.812 user 0m18.665s 00:10:43.812 sys 0m0.611s 00:10:43.812 15:52:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:43.812 15:52:54 -- common/autotest_common.sh@10 -- # set +x 00:10:43.812 15:52:54 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:43.812 15:52:54 -- nvme/nvme.sh@102 -- # kill_stub 00:10:43.812 15:52:54 -- common/autotest_common.sh@1075 -- # [[ -e /proc/63331 ]] 00:10:43.812 15:52:54 -- common/autotest_common.sh@1076 -- # kill 63331 00:10:43.812 15:52:54 -- common/autotest_common.sh@1077 -- # wait 63331 00:10:43.812 [2024-11-29 15:52:55.020064] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64268) is not found. Dropping the request. 00:10:43.812 [2024-11-29 15:52:55.020246] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64268) is not found. Dropping the request. 00:10:43.812 [2024-11-29 15:52:55.020263] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64268) is not found. Dropping the request. 00:10:43.812 [2024-11-29 15:52:55.020274] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64268) is not found. Dropping the request. 00:10:44.753 [2024-11-29 15:52:56.027692] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64268) is not found. Dropping the request. 00:10:44.753 [2024-11-29 15:52:56.027928] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64268) is not found. Dropping the request. 00:10:44.753 [2024-11-29 15:52:56.027948] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64268) is not found. Dropping the request. 00:10:44.753 [2024-11-29 15:52:56.027960] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64268) is not found. Dropping the request. 00:10:45.698 [2024-11-29 15:52:57.035347] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64268) is not found. Dropping the request. 00:10:45.698 [2024-11-29 15:52:57.035396] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64268) is not found. 
Dropping the request. 00:10:45.698 [2024-11-29 15:52:57.035406] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64268) is not found. Dropping the request. 00:10:45.698 [2024-11-29 15:52:57.035417] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64268) is not found. Dropping the request. 00:10:46.642 [2024-11-29 15:52:58.045550] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64268) is not found. Dropping the request. 00:10:46.642 [2024-11-29 15:52:58.045604] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64268) is not found. Dropping the request. 00:10:46.642 [2024-11-29 15:52:58.045615] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64268) is not found. Dropping the request. 00:10:46.642 [2024-11-29 15:52:58.045627] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64268) is not found. Dropping the request. 00:10:46.903 15:52:58 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:10:46.903 15:52:58 -- common/autotest_common.sh@1083 -- # echo 2 00:10:46.903 15:52:58 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:46.903 15:52:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:46.903 15:52:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:46.903 15:52:58 -- common/autotest_common.sh@10 -- # set +x 00:10:46.903 ************************************ 00:10:46.903 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:46.903 ************************************ 00:10:46.903 15:52:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:46.903 * Looking for test storage... 00:10:46.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:46.903 15:52:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:46.903 15:52:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:46.903 15:52:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:47.165 15:52:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:47.165 15:52:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:47.165 15:52:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:47.165 15:52:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:47.165 15:52:58 -- scripts/common.sh@335 -- # IFS=.-: 00:10:47.165 15:52:58 -- scripts/common.sh@335 -- # read -ra ver1 00:10:47.165 15:52:58 -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.165 15:52:58 -- scripts/common.sh@336 -- # read -ra ver2 00:10:47.165 15:52:58 -- scripts/common.sh@337 -- # local 'op=<' 00:10:47.165 15:52:58 -- scripts/common.sh@339 -- # ver1_l=2 00:10:47.165 15:52:58 -- scripts/common.sh@340 -- # ver2_l=1 00:10:47.165 15:52:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:47.165 15:52:58 -- scripts/common.sh@343 -- # case "$op" in 00:10:47.165 15:52:58 -- scripts/common.sh@344 -- # : 1 00:10:47.165 15:52:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:47.165 15:52:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:47.165 15:52:58 -- scripts/common.sh@364 -- # decimal 1 00:10:47.165 15:52:58 -- scripts/common.sh@352 -- # local d=1 00:10:47.165 15:52:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.165 15:52:58 -- scripts/common.sh@354 -- # echo 1 00:10:47.165 15:52:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:47.165 15:52:58 -- scripts/common.sh@365 -- # decimal 2 00:10:47.165 15:52:58 -- scripts/common.sh@352 -- # local d=2 00:10:47.165 15:52:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.165 15:52:58 -- scripts/common.sh@354 -- # echo 2 00:10:47.165 15:52:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:47.165 15:52:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:47.165 15:52:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:47.165 15:52:58 -- scripts/common.sh@367 -- # return 0 00:10:47.165 15:52:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.165 15:52:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:47.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.165 --rc genhtml_branch_coverage=1 00:10:47.165 --rc genhtml_function_coverage=1 00:10:47.165 --rc genhtml_legend=1 00:10:47.165 --rc geninfo_all_blocks=1 00:10:47.165 --rc geninfo_unexecuted_blocks=1 00:10:47.165 00:10:47.165 ' 00:10:47.165 15:52:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:47.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.165 --rc genhtml_branch_coverage=1 00:10:47.165 --rc genhtml_function_coverage=1 00:10:47.165 --rc genhtml_legend=1 00:10:47.165 --rc geninfo_all_blocks=1 00:10:47.165 --rc geninfo_unexecuted_blocks=1 00:10:47.165 00:10:47.165 ' 00:10:47.165 15:52:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:47.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.165 --rc genhtml_branch_coverage=1 00:10:47.165 --rc genhtml_function_coverage=1 00:10:47.165 --rc genhtml_legend=1 00:10:47.165 --rc geninfo_all_blocks=1 00:10:47.165 --rc geninfo_unexecuted_blocks=1 00:10:47.165 00:10:47.165 ' 00:10:47.165 15:52:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:47.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.165 --rc genhtml_branch_coverage=1 00:10:47.165 --rc genhtml_function_coverage=1 00:10:47.165 --rc genhtml_legend=1 00:10:47.165 --rc geninfo_all_blocks=1 00:10:47.165 --rc geninfo_unexecuted_blocks=1 00:10:47.165 00:10:47.165 ' 00:10:47.166 15:52:58 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:47.166 15:52:58 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:47.166 15:52:58 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:47.166 15:52:58 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:47.166 15:52:58 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:47.166 15:52:58 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:47.166 15:52:58 -- common/autotest_common.sh@1519 -- # bdfs=() 00:10:47.166 15:52:58 -- common/autotest_common.sh@1519 -- # local bdfs 00:10:47.166 15:52:58 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:10:47.166 15:52:58 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:10:47.166 15:52:58 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:47.166 15:52:58 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:47.166 15:52:58 -- common/autotest_common.sh@1509 
-- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:47.166 15:52:58 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:47.166 15:52:58 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:47.166 15:52:58 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:47.166 15:52:58 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:47.166 15:52:58 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:10:47.166 15:52:58 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:10:47.166 15:52:58 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:10:47.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.166 15:52:58 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64587 00:10:47.166 15:52:58 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:47.166 15:52:58 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64587 00:10:47.166 15:52:58 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:47.166 15:52:58 -- common/autotest_common.sh@829 -- # '[' -z 64587 ']' 00:10:47.166 15:52:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.166 15:52:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:47.166 15:52:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.166 15:52:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:47.166 15:52:58 -- common/autotest_common.sh@10 -- # set +x 00:10:47.166 [2024-11-29 15:52:58.510452] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
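[annotation] Before the EAL parameter dump that follows, the shape of this test is worth spelling out: the spdk_tgt started above hosts controller nvme0; the test arms a one-shot error injection that holds the next admin command with opcode 10 (0x0a, Get Features) for up to 15 s without submitting it, fires such a command through bdev_nvme_send_cmd, then verifies that bdev_nvme_reset_controller still completes and that the stuck command is manually completed with SCT 0x0 / SC 0x1, the values later decoded from the saved cpl. Condensed from the RPCs in the trace below, where rpc.py stands for the full scripts/rpc.py path and the base64 command payload is elided:

    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0
    rpc.py bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c <base64 payload> &   # blocks on the held command
    rpc.py bdev_nvme_reset_controller nvme0                                    # must succeed while the command is stuck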
00:10:47.166 [2024-11-29 15:52:58.510590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64587 ] 00:10:47.427 [2024-11-29 15:52:58.675194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:47.689 [2024-11-29 15:52:58.921920] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:47.689 [2024-11-29 15:52:58.922481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.689 [2024-11-29 15:52:58.922701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.689 [2024-11-29 15:52:58.922967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.689 [2024-11-29 15:52:58.923012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:48.633 15:53:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:48.633 15:53:00 -- common/autotest_common.sh@862 -- # return 0 00:10:48.633 15:53:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:10:48.633 15:53:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.633 15:53:00 -- common/autotest_common.sh@10 -- # set +x 00:10:48.895 nvme0n1 00:10:48.895 15:53:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.895 15:53:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:48.896 15:53:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_GwtZd.txt 00:10:48.896 15:53:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:48.896 15:53:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.896 15:53:00 -- common/autotest_common.sh@10 -- # set +x 00:10:48.896 true 00:10:48.896 15:53:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.896 15:53:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:48.896 15:53:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732895580 00:10:48.896 15:53:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64618 00:10:48.896 15:53:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:48.896 15:53:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:48.896 15:53:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:50.801 15:53:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.801 15:53:02 -- common/autotest_common.sh@10 -- # set +x 00:10:50.801 [2024-11-29 15:53:02.109208] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:50.801 [2024-11-29 15:53:02.109399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:50.801 [2024-11-29 15:53:02.109418] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:50.801 [2024-11-29 15:53:02.109428] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.801 [2024-11-29 15:53:02.110647] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:50.801 15:53:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64618 00:10:50.801 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64618 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64618 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:50.801 15:53:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.801 15:53:02 -- common/autotest_common.sh@10 -- # set +x 00:10:50.801 15:53:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_GwtZd.txt 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_GwtZd.txt 00:10:50.801 15:53:02 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64587 00:10:50.801 15:53:02 -- common/autotest_common.sh@936 -- # '[' -z 64587 ']' 00:10:50.801 15:53:02 -- common/autotest_common.sh@940 -- # kill -0 64587 00:10:50.801 15:53:02 -- common/autotest_common.sh@941 -- # uname 00:10:50.801 15:53:02 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:50.801 15:53:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64587 00:10:50.801 killing process with pid 64587 00:10:50.801 15:53:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:50.801 15:53:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:50.801 15:53:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64587' 00:10:50.801 15:53:02 -- common/autotest_common.sh@955 -- # kill 64587 00:10:50.801 15:53:02 -- common/autotest_common.sh@960 -- # wait 64587 00:10:52.179 15:53:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:52.179 15:53:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:52.179 00:10:52.179 real 0m5.144s 00:10:52.179 user 0m17.918s 00:10:52.179 sys 0m0.600s 00:10:52.179 ************************************ 00:10:52.179 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:52.179 ************************************ 00:10:52.179 15:53:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:52.179 15:53:03 -- common/autotest_common.sh@10 -- # set +x 00:10:52.179 15:53:03 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:52.179 15:53:03 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:52.179 15:53:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:52.179 15:53:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:52.179 15:53:03 -- common/autotest_common.sh@10 -- # set +x 00:10:52.179 ************************************ 00:10:52.179 START TEST nvme_fio 00:10:52.179 ************************************ 00:10:52.179 15:53:03 -- common/autotest_common.sh@1114 -- # nvme_fio_test 00:10:52.179 15:53:03 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:52.179 15:53:03 -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:52.179 15:53:03 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:52.179 15:53:03 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:52.179 15:53:03 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:52.179 15:53:03 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:52.179 15:53:03 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:52.179 15:53:03 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:52.179 15:53:03 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:52.179 15:53:03 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:52.179 15:53:03 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0' '0000:00:07.0' '0000:00:08.0' '0000:00:09.0') 00:10:52.179 15:53:03 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:52.179 15:53:03 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:52.179 15:53:03 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:52.179 15:53:03 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:52.438 15:53:03 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:52.438 15:53:03 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:52.699 15:53:03 -- nvme/nvme.sh@41 -- # bs=4096 00:10:52.699 15:53:03 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:52.699 15:53:03 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:52.699 15:53:03 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:52.699 15:53:03 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:52.699 15:53:03 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:52.699 15:53:03 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:52.699 15:53:03 -- common/autotest_common.sh@1330 -- # shift 00:10:52.699 15:53:03 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:52.699 15:53:03 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:52.699 15:53:03 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:52.699 15:53:03 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:52.699 15:53:03 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:52.699 15:53:03 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:52.699 15:53:03 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:52.699 15:53:03 -- common/autotest_common.sh@1336 -- # break 00:10:52.699 15:53:03 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:52.699 15:53:03 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:52.699 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:52.699 fio-3.35 00:10:52.699 Starting 1 thread 00:10:57.998 00:10:57.998 test: (groupid=0, jobs=1): err= 0: pid=64759: Fri Nov 29 15:53:08 2024 00:10:57.998 read: IOPS=22.5k, BW=87.8MiB/s (92.1MB/s)(176MiB/2001msec) 00:10:57.998 slat (nsec): min=3289, max=78757, avg=5151.71, stdev=2530.98 00:10:57.998 clat (usec): min=320, max=7718, avg=2835.63, stdev=984.90 00:10:57.998 lat (usec): min=324, max=7731, avg=2840.78, stdev=986.38 00:10:57.998 clat percentiles (usec): 00:10:57.998 | 1.00th=[ 1565], 5.00th=[ 2073], 10.00th=[ 2245], 20.00th=[ 2311], 00:10:57.998 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[ 2540], 00:10:57.998 | 70.00th=[ 2704], 80.00th=[ 3032], 90.00th=[ 4424], 95.00th=[ 5407], 00:10:57.998 | 99.00th=[ 6259], 99.50th=[ 6587], 99.90th=[ 7177], 99.95th=[ 7308], 00:10:57.998 | 99.99th=[ 7635] 00:10:57.998 bw ( KiB/s): min=83168, max=99128, per=100.00%, avg=90194.67, stdev=8149.04, samples=3 00:10:57.998 iops : min=20792, max=24782, avg=22548.67, stdev=2037.26, samples=3 00:10:57.998 write: IOPS=22.4k, BW=87.3MiB/s (91.6MB/s)(175MiB/2001msec); 0 zone resets 00:10:57.998 slat (usec): min=3, max=279, avg= 5.32, stdev= 2.77 00:10:57.998 clat (usec): min=304, max=8975, avg=2848.92, stdev=993.52 00:10:57.998 lat (usec): min=309, max=8979, avg=2854.24, stdev=994.95 00:10:57.998 clat percentiles (usec): 00:10:57.998 | 1.00th=[ 1565], 5.00th=[ 2073], 10.00th=[ 2245], 20.00th=[ 2343], 00:10:57.998 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[ 2540], 00:10:57.998 | 70.00th=[ 2704], 80.00th=[ 3064], 90.00th=[ 4490], 95.00th=[ 5407], 00:10:57.998 | 99.00th=[ 6325], 99.50th=[ 6587], 
99.90th=[ 7242], 99.95th=[ 7504], 00:10:57.998 | 99.99th=[ 7963] 00:10:57.998 bw ( KiB/s): min=83184, max=100096, per=100.00%, avg=90426.67, stdev=8713.24, samples=3 00:10:57.998 iops : min=20796, max=25024, avg=22606.67, stdev=2178.31, samples=3 00:10:57.998 lat (usec) : 500=0.03%, 750=0.03%, 1000=0.08% 00:10:57.998 lat (msec) : 2=3.45%, 4=84.26%, 10=12.15% 00:10:57.998 cpu : usr=99.15%, sys=0.10%, ctx=3, majf=0, minf=608 00:10:57.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:57.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.998 issued rwts: total=44993,44728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.998 00:10:57.998 Run status group 0 (all jobs): 00:10:57.998 READ: bw=87.8MiB/s (92.1MB/s), 87.8MiB/s-87.8MiB/s (92.1MB/s-92.1MB/s), io=176MiB (184MB), run=2001-2001msec 00:10:57.998 WRITE: bw=87.3MiB/s (91.6MB/s), 87.3MiB/s-87.3MiB/s (91.6MB/s-91.6MB/s), io=175MiB (183MB), run=2001-2001msec 00:10:57.998 ----------------------------------------------------- 00:10:57.998 Suppressions used: 00:10:57.998 count bytes template 00:10:57.998 1 32 /usr/src/fio/parse.c 00:10:57.998 1 8 libtcmalloc_minimal.so 00:10:57.998 ----------------------------------------------------- 00:10:57.998 00:10:57.998 15:53:09 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:57.998 15:53:09 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:57.998 15:53:09 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:57.998 15:53:09 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:57.998 15:53:09 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:57.998 15:53:09 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:58.259 15:53:09 -- nvme/nvme.sh@41 -- # bs=4096 00:10:58.259 15:53:09 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:58.259 15:53:09 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:58.259 15:53:09 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:58.259 15:53:09 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:58.259 15:53:09 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:58.259 15:53:09 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:58.259 15:53:09 -- common/autotest_common.sh@1330 -- # shift 00:10:58.259 15:53:09 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:58.259 15:53:09 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:58.259 15:53:09 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:58.259 15:53:09 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:58.259 15:53:09 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:58.259 15:53:09 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:58.259 15:53:09 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:58.259 
15:53:09 -- common/autotest_common.sh@1336 -- # break 00:10:58.259 15:53:09 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:58.259 15:53:09 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:58.520 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:58.520 fio-3.35 00:10:58.520 Starting 1 thread 00:11:03.808 00:11:03.809 test: (groupid=0, jobs=1): err= 0: pid=64826: Fri Nov 29 15:53:15 2024 00:11:03.809 read: IOPS=18.2k, BW=71.3MiB/s (74.7MB/s)(143MiB/2001msec) 00:11:03.809 slat (nsec): min=4287, max=78180, avg=5964.86, stdev=2942.61 00:11:03.809 clat (usec): min=206, max=12553, avg=3481.44, stdev=1220.28 00:11:03.809 lat (usec): min=211, max=12631, avg=3487.41, stdev=1221.59 00:11:03.809 clat percentiles (usec): 00:11:03.809 | 1.00th=[ 1991], 5.00th=[ 2376], 10.00th=[ 2507], 20.00th=[ 2638], 00:11:03.809 | 30.00th=[ 2769], 40.00th=[ 2900], 50.00th=[ 3032], 60.00th=[ 3195], 00:11:03.809 | 70.00th=[ 3490], 80.00th=[ 4228], 90.00th=[ 5407], 95.00th=[ 6259], 00:11:03.809 | 99.00th=[ 7373], 99.50th=[ 7701], 99.90th=[ 8848], 99.95th=[10159], 00:11:03.809 | 99.99th=[12256] 00:11:03.809 bw ( KiB/s): min=63328, max=80232, per=100.00%, avg=73541.33, stdev=8985.72, samples=3 00:11:03.809 iops : min=15832, max=20058, avg=18385.33, stdev=2246.43, samples=3 00:11:03.809 write: IOPS=18.3k, BW=71.3MiB/s (74.8MB/s)(143MiB/2001msec); 0 zone resets 00:11:03.809 slat (nsec): min=4392, max=88712, avg=6127.87, stdev=3055.38 00:11:03.809 clat (usec): min=214, max=12382, avg=3503.39, stdev=1222.08 00:11:03.809 lat (usec): min=219, max=12397, avg=3509.51, stdev=1223.40 00:11:03.809 clat percentiles (usec): 00:11:03.809 | 1.00th=[ 2008], 5.00th=[ 2376], 10.00th=[ 2507], 20.00th=[ 2671], 00:11:03.809 | 30.00th=[ 2802], 40.00th=[ 2933], 50.00th=[ 3064], 60.00th=[ 3228], 00:11:03.809 | 70.00th=[ 3523], 80.00th=[ 4228], 90.00th=[ 5473], 95.00th=[ 6259], 00:11:03.809 | 99.00th=[ 7373], 99.50th=[ 7767], 99.90th=[ 8848], 99.95th=[10421], 00:11:03.809 | 99.99th=[12125] 00:11:03.809 bw ( KiB/s): min=63584, max=80320, per=100.00%, avg=73461.33, stdev=8766.85, samples=3 00:11:03.809 iops : min=15896, max=20080, avg=18365.33, stdev=2191.71, samples=3 00:11:03.809 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:11:03.809 lat (msec) : 2=0.96%, 4=76.80%, 10=22.15%, 20=0.06% 00:11:03.809 cpu : usr=98.85%, sys=0.15%, ctx=7, majf=0, minf=608 00:11:03.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:03.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.809 issued rwts: total=36506,36527,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.809 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.809 00:11:03.809 Run status group 0 (all jobs): 00:11:03.809 READ: bw=71.3MiB/s (74.7MB/s), 71.3MiB/s-71.3MiB/s (74.7MB/s-74.7MB/s), io=143MiB (150MB), run=2001-2001msec 00:11:03.809 WRITE: bw=71.3MiB/s (74.8MB/s), 71.3MiB/s-71.3MiB/s (74.8MB/s-74.8MB/s), io=143MiB (150MB), run=2001-2001msec 00:11:04.070 ----------------------------------------------------- 00:11:04.070 Suppressions used: 00:11:04.070 count bytes template 00:11:04.070 1 32 /usr/src/fio/parse.c 00:11:04.070 1 8 libtcmalloc_minimal.so 
00:11:04.070 ----------------------------------------------------- 00:11:04.070 00:11:04.070 15:53:15 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:04.070 15:53:15 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:04.070 15:53:15 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:04.070 15:53:15 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:11:04.330 15:53:15 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:11:04.330 15:53:15 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:04.592 15:53:15 -- nvme/nvme.sh@41 -- # bs=4096 00:11:04.592 15:53:15 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:04.592 15:53:15 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:04.592 15:53:15 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:04.592 15:53:15 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:04.592 15:53:15 -- common/autotest_common.sh@1328 -- # local sanitizers 00:11:04.592 15:53:15 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:04.592 15:53:15 -- common/autotest_common.sh@1330 -- # shift 00:11:04.592 15:53:15 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:04.592 15:53:15 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:04.592 15:53:15 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:04.592 15:53:15 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:04.592 15:53:15 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:04.592 15:53:15 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:04.592 15:53:15 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:04.592 15:53:15 -- common/autotest_common.sh@1336 -- # break 00:11:04.592 15:53:15 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:04.592 15:53:15 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:04.592 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:04.592 fio-3.35 00:11:04.592 Starting 1 thread 00:11:11.178 00:11:11.178 test: (groupid=0, jobs=1): err= 0: pid=64914: Fri Nov 29 15:53:21 2024 00:11:11.178 read: IOPS=17.3k, BW=67.5MiB/s (70.8MB/s)(135MiB/2001msec) 00:11:11.178 slat (nsec): min=4212, max=82276, avg=5895.86, stdev=3241.68 00:11:11.178 clat (usec): min=1156, max=12858, avg=3673.01, stdev=1279.58 00:11:11.178 lat (usec): min=1160, max=12907, avg=3678.90, stdev=1280.83 00:11:11.178 clat percentiles (usec): 00:11:11.178 | 1.00th=[ 2114], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2638], 00:11:11.178 | 30.00th=[ 2802], 40.00th=[ 2966], 50.00th=[ 3163], 60.00th=[ 3523], 00:11:11.178 | 70.00th=[ 4146], 80.00th=[ 4817], 90.00th=[ 5538], 95.00th=[ 6194], 00:11:11.178 | 99.00th=[ 7439], 99.50th=[ 8029], 99.90th=[ 9372], 99.95th=[10945], 00:11:11.178 | 
99.99th=[12780] 00:11:11.178 bw ( KiB/s): min=56392, max=73872, per=97.34%, avg=67258.67, stdev=9484.50, samples=3 00:11:11.178 iops : min=14098, max=18468, avg=16814.67, stdev=2371.12, samples=3 00:11:11.178 write: IOPS=17.3k, BW=67.5MiB/s (70.8MB/s)(135MiB/2001msec); 0 zone resets 00:11:11.178 slat (nsec): min=4273, max=73660, avg=6014.97, stdev=3269.90 00:11:11.178 clat (usec): min=1155, max=12793, avg=3709.19, stdev=1291.58 00:11:11.178 lat (usec): min=1160, max=12805, avg=3715.20, stdev=1292.86 00:11:11.178 clat percentiles (usec): 00:11:11.178 | 1.00th=[ 2114], 5.00th=[ 2376], 10.00th=[ 2507], 20.00th=[ 2671], 00:11:11.178 | 30.00th=[ 2835], 40.00th=[ 2999], 50.00th=[ 3195], 60.00th=[ 3556], 00:11:11.178 | 70.00th=[ 4146], 80.00th=[ 4817], 90.00th=[ 5604], 95.00th=[ 6259], 00:11:11.178 | 99.00th=[ 7504], 99.50th=[ 8160], 99.90th=[ 9503], 99.95th=[11469], 00:11:11.178 | 99.99th=[12649] 00:11:11.178 bw ( KiB/s): min=56656, max=74016, per=97.13%, avg=67178.67, stdev=9248.17, samples=3 00:11:11.178 iops : min=14164, max=18504, avg=16794.67, stdev=2312.04, samples=3 00:11:11.178 lat (msec) : 2=0.67%, 4=67.18%, 10=32.06%, 20=0.09% 00:11:11.178 cpu : usr=98.75%, sys=0.15%, ctx=24, majf=0, minf=608 00:11:11.178 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:11.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:11.178 issued rwts: total=34565,34598,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.178 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:11.178 00:11:11.178 Run status group 0 (all jobs): 00:11:11.178 READ: bw=67.5MiB/s (70.8MB/s), 67.5MiB/s-67.5MiB/s (70.8MB/s-70.8MB/s), io=135MiB (142MB), run=2001-2001msec 00:11:11.178 WRITE: bw=67.5MiB/s (70.8MB/s), 67.5MiB/s-67.5MiB/s (70.8MB/s-70.8MB/s), io=135MiB (142MB), run=2001-2001msec 00:11:11.178 ----------------------------------------------------- 00:11:11.178 Suppressions used: 00:11:11.178 count bytes template 00:11:11.178 1 32 /usr/src/fio/parse.c 00:11:11.178 1 8 libtcmalloc_minimal.so 00:11:11.178 ----------------------------------------------------- 00:11:11.178 00:11:11.178 15:53:21 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:11.178 15:53:21 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:11.178 15:53:21 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:11.178 15:53:21 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:11.178 15:53:21 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:11.178 15:53:21 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:11.178 15:53:22 -- nvme/nvme.sh@41 -- # bs=4096 00:11:11.178 15:53:22 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:11.178 15:53:22 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:11.178 15:53:22 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:11.178 15:53:22 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:11.178 15:53:22 -- common/autotest_common.sh@1328 -- # local sanitizers 00:11:11.178 15:53:22 -- 
common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:11.178 15:53:22 -- common/autotest_common.sh@1330 -- # shift 00:11:11.178 15:53:22 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:11.179 15:53:22 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:11.179 15:53:22 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:11.179 15:53:22 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:11.179 15:53:22 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:11.179 15:53:22 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:11.179 15:53:22 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:11.179 15:53:22 -- common/autotest_common.sh@1336 -- # break 00:11:11.179 15:53:22 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:11.179 15:53:22 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:11.179 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:11.179 fio-3.35 00:11:11.179 Starting 1 thread 00:11:17.774 00:11:17.774 test: (groupid=0, jobs=1): err= 0: pid=64998: Fri Nov 29 15:53:29 2024 00:11:17.774 read: IOPS=15.3k, BW=59.7MiB/s (62.6MB/s)(119MiB/2001msec) 00:11:17.774 slat (usec): min=4, max=108, avg= 6.75, stdev= 3.92 00:11:17.774 clat (usec): min=213, max=10699, avg=4154.06, stdev=1452.47 00:11:17.774 lat (usec): min=218, max=10708, avg=4160.80, stdev=1453.82 00:11:17.774 clat percentiles (usec): 00:11:17.774 | 1.00th=[ 2311], 5.00th=[ 2638], 10.00th=[ 2769], 20.00th=[ 2933], 00:11:17.774 | 30.00th=[ 3097], 40.00th=[ 3294], 50.00th=[ 3556], 60.00th=[ 4080], 00:11:17.774 | 70.00th=[ 4883], 80.00th=[ 5538], 90.00th=[ 6325], 95.00th=[ 6915], 00:11:17.774 | 99.00th=[ 8160], 99.50th=[ 8848], 99.90th=[10028], 99.95th=[10290], 00:11:17.774 | 99.99th=[10552] 00:11:17.774 bw ( KiB/s): min=54600, max=67064, per=100.00%, avg=61141.33, stdev=6254.99, samples=3 00:11:17.774 iops : min=13650, max=16766, avg=15285.33, stdev=1563.75, samples=3 00:11:17.774 write: IOPS=15.3k, BW=59.8MiB/s (62.7MB/s)(120MiB/2001msec); 0 zone resets 00:11:17.774 slat (nsec): min=4908, max=97318, avg=6888.38, stdev=4043.12 00:11:17.774 clat (usec): min=292, max=10623, avg=4184.21, stdev=1453.43 00:11:17.774 lat (usec): min=298, max=10638, avg=4191.10, stdev=1454.85 00:11:17.774 clat percentiles (usec): 00:11:17.774 | 1.00th=[ 2311], 5.00th=[ 2638], 10.00th=[ 2769], 20.00th=[ 2966], 00:11:17.774 | 30.00th=[ 3130], 40.00th=[ 3326], 50.00th=[ 3589], 60.00th=[ 4146], 00:11:17.774 | 70.00th=[ 4883], 80.00th=[ 5538], 90.00th=[ 6325], 95.00th=[ 6980], 00:11:17.774 | 99.00th=[ 8094], 99.50th=[ 8586], 99.90th=[ 9896], 99.95th=[10159], 00:11:17.774 | 99.99th=[10552] 00:11:17.774 bw ( KiB/s): min=54856, max=66632, per=99.14%, avg=60674.67, stdev=5889.22, samples=3 00:11:17.774 iops : min=13714, max=16658, avg=15168.67, stdev=1472.31, samples=3 00:11:17.774 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02% 00:11:17.774 lat (msec) : 2=0.43%, 4=57.99%, 10=41.42%, 20=0.10% 00:11:17.774 cpu : usr=98.50%, sys=0.10%, ctx=5, majf=0, minf=606 00:11:17.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:17.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:17.774 issued rwts: total=30572,30615,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.774 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:17.774 00:11:17.774 Run status group 0 (all jobs): 00:11:17.774 READ: bw=59.7MiB/s (62.6MB/s), 59.7MiB/s-59.7MiB/s (62.6MB/s-62.6MB/s), io=119MiB (125MB), run=2001-2001msec 00:11:17.774 WRITE: bw=59.8MiB/s (62.7MB/s), 59.8MiB/s-59.8MiB/s (62.7MB/s-62.7MB/s), io=120MiB (125MB), run=2001-2001msec 00:11:18.036 ----------------------------------------------------- 00:11:18.036 Suppressions used: 00:11:18.036 count bytes template 00:11:18.036 1 32 /usr/src/fio/parse.c 00:11:18.036 1 8 libtcmalloc_minimal.so 00:11:18.036 ----------------------------------------------------- 00:11:18.036 00:11:18.036 15:53:29 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:18.036 15:53:29 -- nvme/nvme.sh@46 -- # true 00:11:18.036 00:11:18.036 real 0m25.947s 00:11:18.036 user 0m16.035s 00:11:18.036 sys 0m17.837s 00:11:18.036 ************************************ 00:11:18.036 END TEST nvme_fio 00:11:18.036 ************************************ 00:11:18.036 15:53:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:18.036 15:53:29 -- common/autotest_common.sh@10 -- # set +x 00:11:18.036 00:11:18.036 real 1m40.220s 00:11:18.036 user 3m40.965s 00:11:18.036 sys 0m28.198s 00:11:18.037 ************************************ 00:11:18.037 END TEST nvme 00:11:18.037 15:53:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:18.037 15:53:29 -- common/autotest_common.sh@10 -- # set +x 00:11:18.037 ************************************ 00:11:18.299 15:53:29 -- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]] 00:11:18.299 15:53:29 -- spdk/autotest.sh@214 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:18.299 15:53:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:18.299 15:53:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:18.299 15:53:29 -- common/autotest_common.sh@10 -- # set +x 00:11:18.299 ************************************ 00:11:18.299 START TEST nvme_scc 00:11:18.299 ************************************ 00:11:18.299 15:53:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:18.299 * Looking for test storage... 
00:11:18.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:18.299 15:53:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:18.299 15:53:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:18.299 15:53:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:18.299 15:53:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:18.299 15:53:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:18.299 15:53:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:18.299 15:53:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:18.299 15:53:29 -- scripts/common.sh@335 -- # IFS=.-: 00:11:18.299 15:53:29 -- scripts/common.sh@335 -- # read -ra ver1 00:11:18.299 15:53:29 -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.299 15:53:29 -- scripts/common.sh@336 -- # read -ra ver2 00:11:18.299 15:53:29 -- scripts/common.sh@337 -- # local 'op=<' 00:11:18.299 15:53:29 -- scripts/common.sh@339 -- # ver1_l=2 00:11:18.299 15:53:29 -- scripts/common.sh@340 -- # ver2_l=1 00:11:18.299 15:53:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:18.299 15:53:29 -- scripts/common.sh@343 -- # case "$op" in 00:11:18.299 15:53:29 -- scripts/common.sh@344 -- # : 1 00:11:18.299 15:53:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:18.299 15:53:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:18.299 15:53:29 -- scripts/common.sh@364 -- # decimal 1 00:11:18.299 15:53:29 -- scripts/common.sh@352 -- # local d=1 00:11:18.299 15:53:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.299 15:53:29 -- scripts/common.sh@354 -- # echo 1 00:11:18.299 15:53:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:18.299 15:53:29 -- scripts/common.sh@365 -- # decimal 2 00:11:18.299 15:53:29 -- scripts/common.sh@352 -- # local d=2 00:11:18.299 15:53:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.299 15:53:29 -- scripts/common.sh@354 -- # echo 2 00:11:18.299 15:53:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:18.299 15:53:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:18.299 15:53:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:18.299 15:53:29 -- scripts/common.sh@367 -- # return 0 00:11:18.299 15:53:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.299 15:53:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:18.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.299 --rc genhtml_branch_coverage=1 00:11:18.299 --rc genhtml_function_coverage=1 00:11:18.299 --rc genhtml_legend=1 00:11:18.299 --rc geninfo_all_blocks=1 00:11:18.299 --rc geninfo_unexecuted_blocks=1 00:11:18.299 00:11:18.299 ' 00:11:18.299 15:53:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:18.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.299 --rc genhtml_branch_coverage=1 00:11:18.299 --rc genhtml_function_coverage=1 00:11:18.299 --rc genhtml_legend=1 00:11:18.299 --rc geninfo_all_blocks=1 00:11:18.299 --rc geninfo_unexecuted_blocks=1 00:11:18.299 00:11:18.299 ' 00:11:18.299 15:53:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:18.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.299 --rc genhtml_branch_coverage=1 00:11:18.299 --rc genhtml_function_coverage=1 00:11:18.299 --rc genhtml_legend=1 00:11:18.299 --rc geninfo_all_blocks=1 00:11:18.299 --rc geninfo_unexecuted_blocks=1 00:11:18.299 00:11:18.299 ' 00:11:18.299 15:53:29 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:18.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.299 --rc genhtml_branch_coverage=1 00:11:18.299 --rc genhtml_function_coverage=1 00:11:18.299 --rc genhtml_legend=1 00:11:18.299 --rc geninfo_all_blocks=1 00:11:18.299 --rc geninfo_unexecuted_blocks=1 00:11:18.299 00:11:18.299 ' 00:11:18.299 15:53:29 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:18.299 15:53:29 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:18.299 15:53:29 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:18.299 15:53:29 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:18.299 15:53:29 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:18.299 15:53:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.299 15:53:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.299 15:53:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.299 15:53:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.299 15:53:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.300 15:53:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.300 15:53:29 -- paths/export.sh@5 -- # export PATH 00:11:18.300 15:53:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.300 15:53:29 -- nvme/functions.sh@10 -- # ctrls=() 00:11:18.300 15:53:29 -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:18.300 15:53:29 -- nvme/functions.sh@11 -- # nvmes=() 00:11:18.300 15:53:29 -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:18.300 15:53:29 -- nvme/functions.sh@12 -- # bdfs=() 00:11:18.300 15:53:29 -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:18.300 15:53:29 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:18.300 15:53:29 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:18.300 
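The "lt 1.15 2" gate traced above decides whether the installed lcov predates 2.x (here it does, hence the legacy --rc lcov_* option names exported in LCOV_OPTS). The comparison is field-wise: both versions are split on '.', '-' and ':' and compared numerically, padding the shorter one with zeros. A self-contained sketch of the same idea; the function name is mine, not the harness's:

    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        local v
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"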
15:53:29 -- nvme/functions.sh@14 -- # nvme_name= 00:11:18.300 15:53:29 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:18.300 15:53:29 -- nvme/nvme_scc.sh@12 -- # uname 00:11:18.300 15:53:29 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:18.300 15:53:29 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:18.300 15:53:29 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:18.873 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:18.873 Waiting for block devices as requested 00:11:18.873 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:18.873 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:19.135 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:19.135 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:24.431 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:24.431 15:53:35 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:24.431 15:53:35 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:24.431 15:53:35 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:24.431 15:53:35 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:24.431 15:53:35 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:24.431 15:53:35 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:24.431 15:53:35 -- scripts/common.sh@15 -- # local i 00:11:24.431 15:53:35 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:24.431 15:53:35 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:24.431 15:53:35 -- scripts/common.sh@24 -- # return 0 00:11:24.431 15:53:35 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:24.431 15:53:35 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:24.431 15:53:35 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:24.431 15:53:35 -- nvme/functions.sh@18 -- # shift 00:11:24.431 15:53:35 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:24.431 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.431 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.431 15:53:35 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:24.431 15:53:35 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:24.431 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.431 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.431 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:24.431 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:24.431 15:53:35 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:24.431 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.431 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.431 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:24.431 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:24.431 15:53:35 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:24.431 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.431 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.431 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:24.431 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:24.431 15:53:35 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:24.431 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.431 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.431 15:53:35 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 
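scan_nvme_ctrls, starting here, walks every /sys/class/nvme/nvme* controller, checks its PCI address against the allow/block lists, then folds nvme-cli's id-ctrl output into one bash associative array per controller (nvme0, nvme1, ...). A condensed sketch of that read loop; direct assignment stands in for the eval used by functions.sh, and the device path is illustrative:

    # id-ctrl prints one "field : value" pair per line.
    declare -A nvme0
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue       # skip lines with no field/value pair
        reg=${reg//[[:space:]]/}        # e.g. vid, sn, mdts, subnqn, ps0
        nvme0[$reg]=${val# }            # drop the single leading space
    done < <(nvme id-ctrl /dev/nvme0)
    echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]} subnqn=${nvme0[subnqn]}"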
00:11:24.431 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:24.431 15:53:35 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:24.431 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.431 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.431 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:24.431 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:24.431 15:53:35 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:24.431 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.431 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.431 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:24.431 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:24.431 15:53:35 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:24.431 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.431 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.432 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.432 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.432 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.432 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.432 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.432 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.432 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.432 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.432 
15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.432 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.432 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.432 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.432 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.432 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.432 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.432 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.432 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.432 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.432 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.432 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:24.432 15:53:35 -- 
nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.432 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.432 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.432 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.432 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.432 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:24.432 15:53:35 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.432 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.433 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.433 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.433 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.433 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.433 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.433 15:53:35 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.433 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.433 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.433 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.433 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.433 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.433 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.433 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.433 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.433 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.433 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:24.433 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.433 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.433 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.433 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.433 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.433 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.433 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.433 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.433 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:24.433 15:53:35 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:24.433 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.434 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.434 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:24.434 15:53:35 
-- nvme/functions.sh@21 -- # IFS=: 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.434 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.434 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.434 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.434 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.434 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.434 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.434 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.434 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.434 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.434 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.434 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:24.434 
15:53:35 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.434 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.434 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.434 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.434 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.434 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.434 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.434 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.434 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.434 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.434 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.434 15:53:35 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:24.434 15:53:35 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.434 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.435 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.435 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.435 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.435 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.435 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.435 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.435 15:53:35 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.435 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.435 15:53:35 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 
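That completes nvme0: identity (vid 0x1b36, QEMU NVMe Ctrl, serial 12343), limits (mdts 7, nn 256), a single power state, and the nqn.2019-08.org.qemu:fdp-subsys3 subsystem NQN. The same identity data can be spot-checked against the standard Linux nvme sysfs attributes:

    # Controller name, PCI address, and subsystem NQN per controller; for this
    # run nvme0 should report 0000:00:09.0 and the fdp-subsys3 NQN.
    for c in /sys/class/nvme/nvme*; do
        printf '%-7s %-14s %s\n' "${c##*/}" "$(cat "$c/address")" "$(cat "$c/subsysnqn")"
    done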
00:11:24.435 15:53:35 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:24.435 15:53:35 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:24.435 15:53:35 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:24.435 15:53:35 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:11:24.435 15:53:35 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:24.435 15:53:35 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:24.435 15:53:35 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:24.435 15:53:35 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:11:24.435 15:53:35 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:11:24.435 15:53:35 -- scripts/common.sh@15 -- # local i 00:11:24.435 15:53:35 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:11:24.435 15:53:35 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:24.435 15:53:35 -- scripts/common.sh@24 -- # return 0 00:11:24.435 15:53:35 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:24.435 15:53:35 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:24.435 15:53:35 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:24.435 15:53:35 -- nvme/functions.sh@18 -- # shift 00:11:24.435 15:53:35 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.435 15:53:35 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:24.435 15:53:35 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.435 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.435 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.435 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.435 15:53:35 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.435 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.435 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:24.435 15:53:35 -- 
nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.435 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.435 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.435 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.435 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.435 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:24.435 15:53:35 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.435 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:24.436 15:53:35 -- 
nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:24.436 15:53:35 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.436 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.436 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.436 15:53:35 -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.437 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.437 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.437 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.437 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.437 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.437 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.437 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.437 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.437 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.437 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.437 15:53:35 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.437 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.437 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.437 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.437 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.437 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.437 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.437 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.437 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.437 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.437 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 
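nvme1 (serial 12342 at 0000:00:08.0) is being folded in the same way, and once its dump completes it will be registered exactly as nvme0 was at functions.sh@60-63 above. That per-controller bookkeeping, in sketch form with a hypothetical helper name:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    register_ctrl() {
        local dev=$1 pci=$2
        ctrls[$dev]=$dev                    # controller name -> ctrl array name
        nvmes[$dev]=${dev}_ns               # controller name -> namespace map name
        bdfs[$dev]=$pci                     # controller name -> PCI address
        ordered_ctrls[${dev/nvme/}]=$dev    # numeric slot keeps scan order
    }
    register_ctrl nvme0 0000:00:09.0
    register_ctrl nvme1 0000:00:08.0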
00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.437 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.437 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:24.437 15:53:35 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.437 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.437 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.438 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.438 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.438 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.438 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.438 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.438 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.438 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.438 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # 
nvme1[awupf]=0 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.438 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.438 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.438 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.438 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.438 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.438 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.438 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.438 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.438 15:53:35 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.438 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.438 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.438 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.438 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.438 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.438 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:24.438 15:53:35 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:24.438 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.439 15:53:35 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.439 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.439 15:53:35 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.439 15:53:35 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:24.439 15:53:35 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:24.439 15:53:35 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:24.439 15:53:35 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:24.439 15:53:35 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:24.439 15:53:35 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:24.439 15:53:35 -- nvme/functions.sh@18 -- # shift 00:11:24.439 15:53:35 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.439 15:53:35 -- nvme/functions.sh@21 
-- # read -r reg val 00:11:24.439 15:53:35 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:24.439 15:53:35 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.439 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.439 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.439 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.439 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.439 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.439 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.439 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.439 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.439 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.439 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # 
nvme1n1[nmic]=0 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.439 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.439 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.439 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.439 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.439 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.439 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.439 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.439 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.439 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:24.439 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.439 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.439 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.440 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.440 15:53:35 -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.440 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.440 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.440 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.440 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.440 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.440 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.440 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.440 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.440 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.440 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # read 
-r reg val 00:11:24.440 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.440 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.440 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.440 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.440 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.440 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.440 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.440 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.440 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.440 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 
00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.440 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.440 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.440 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:24.440 15:53:35 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.440 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.440 15:53:35 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:24.441 15:53:35 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:24.441 15:53:35 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:11:24.441 15:53:35 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:11:24.441 15:53:35 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:11:24.441 15:53:35 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:11:24.441 15:53:35 -- nvme/functions.sh@18 -- # shift 00:11:24.441 15:53:35 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.441 15:53:35 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:11:24.441 15:53:35 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.441 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.441 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.441 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.441 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:24.441 15:53:35 -- 
nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.441 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.441 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.441 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.441 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.441 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.441 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.441 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.441 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.441 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.441 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:24.441 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.441 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.441 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.441 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.441 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.441 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.441 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.441 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.441 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.441 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.441 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:11:24.441 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.442 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:11:24.442 15:53:35 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.442 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.442 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.442 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.442 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.442 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.442 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.442 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.442 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.442 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.442 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.442 15:53:35 -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.442 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.442 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.442 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.442 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.442 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.442 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.442 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.442 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.442 15:53:35 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 
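The trace above repeats one small pattern for every field of the id-ctrl and id-ns output: split each "reg : val" line with IFS=: and read -r, skip lines with no value, and eval the pair into a global associative array declared with local -gA (nvme1, nvme1n1, nvme1n2, ...). A minimal standalone sketch of that loop, with nvme_get_sketch as a hypothetical stand-in for SPDK's nvme_get helper:

    # Minimal sketch of the parsing loop seen in the trace; assumes an
    # nvme-cli binary on PATH (this CI run uses /usr/local/src/nvme-cli/nvme).
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"               # e.g. declares global array nvme1n2
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue     # keep only "reg : val" lines
            reg=${reg//[[:space:]]/}      # "nsze        " -> "nsze"
            val=${val# }                  # drop the space after ":"
            eval "${ref}[${reg}]=\"\$val\""   # nvme1n2[nsze]=0x100000
        done < <(nvme "$@")
    }

Called as nvme_get_sketch nvme1n2 id-ns /dev/nvme1n2, it leaves values such as ${nvme1n2[nsze]} queryable by later test stages, which is how the feature checks further down the log read these tables.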
00:11:24.442 15:53:35 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:24.442 15:53:35 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:11:24.442 15:53:35 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:11:24.442 15:53:35 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:11:24.442 15:53:35 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:11:24.442 15:53:35 -- nvme/functions.sh@18 -- # shift 00:11:24.442 15:53:35 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.442 15:53:35 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:11:24.442 15:53:35 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.442 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.442 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.442 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:24.442 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.443 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.443 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.443 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.443 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.443 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.443 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 
00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.443 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.443 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.443 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.443 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.443 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.443 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.443 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.443 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.443 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.443 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.443 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # eval 
'nvme1n3[nabspf]="0"' 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.443 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.443 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.443 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.443 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.443 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.443 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.443 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.443 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.443 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.444 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.444 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.444 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.444 15:53:35 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.444 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.444 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.444 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.444 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.444 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.444 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.444 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.444 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.444 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.444 15:53:35 
-- nvme/functions.sh@21 -- # read -r reg val 00:11:24.444 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.444 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.444 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.444 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.444 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:24.444 15:53:35 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.444 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.445 15:53:35 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:11:24.445 15:53:35 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:24.445 15:53:35 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:24.445 15:53:35 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:11:24.445 15:53:35 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:24.445 15:53:35 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:24.445 15:53:35 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:24.445 15:53:35 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:11:24.445 15:53:35 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:11:24.445 15:53:35 -- scripts/common.sh@15 -- # local i 00:11:24.445 15:53:35 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:11:24.445 15:53:35 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:24.445 15:53:35 -- scripts/common.sh@24 -- # return 0 00:11:24.445 15:53:35 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:24.445 15:53:35 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:24.445 15:53:35 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:24.445 15:53:35 -- nvme/functions.sh@18 -- # shift 00:11:24.445 15:53:35 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.445 15:53:35 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl 
/dev/nvme2 00:11:24.445 15:53:35 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.445 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.445 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.445 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.445 15:53:35 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.445 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.445 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.445 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.445 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.445 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.445 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:24.445 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.445 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.445 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.445 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.445 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.445 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.445 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.445 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.445 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:24.445 15:53:35 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:24.445 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # eval 
'nvme2[crdt3]="0"' 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.446 
15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 
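What the trace above is looping over: nvme_get runs `nvme id-ctrl` against the device, splits every output line on the first `:` into a register name and a value (the IFS=: / read -r reg val pair that repeats throughout), skips lines with an empty value, and evals each pair into a global associative array named after the controller (the local -gA 'nvme2=()' at functions.sh@20). A minimal standalone sketch of that pattern, assuming nvme-cli's "key : value" output format; the helper name and the whitespace trimming are illustrative, not the exact functions.sh code:

    # Sketch: parse "reg : value" lines from an nvme-cli subcommand into a
    # global associative array whose name is passed as $1 (e.g. nvme2).
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                     # declare -gA nvme2=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}            # strip padding around the key
            [[ -n $val ]] || continue           # the [[ -n ... ]] guard above
            eval "${ref}[${reg}]=\"${val# }\""  # nvme2[vid]="0x1b36", ...
        done < <("$@")                          # e.g. nvme id-ctrl /dev/nvme2
    }

Called as `nvme_get_sketch nvme2 nvme id-ctrl /dev/nvme2`, this leaves `${nvme2[sn]}` and friends available to the caller, which is why the array has to be declared -g (global) from inside the function.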
00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.446 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.446 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.446 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.447 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.447 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.447 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.447 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.447 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.447 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.447 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 
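Note the quoting inside each eval'd assignment: the value is wrapped in double quotes, so the fixed-width identify strings keep their padding; the trace stores nvme2[sn]='12340 ' and nvme2[mn]='QEMU NVMe Ctrl ' with the trailing blanks intact. Anything comparing those fields later has to trim first; one idiomatic way (illustrative, not from functions.sh):

    # Trailing padding survives the eval; `read` strips leading/trailing
    # IFS whitespace, yielding a clean "12340" to compare against.
    read -r sn <<< "${nvme2[sn]}"
    [[ $sn == 12340 ]] && echo "matched controller serial 12340"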
00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.447 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.447 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.447 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.447 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.447 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.447 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.447 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.447 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.447 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.447 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.447 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:24.447 15:53:35 -- 
nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.447 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.447 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.447 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.447 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.447 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.447 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.447 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.447 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.447 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.448 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.448 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.448 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.448 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:24.448 15:53:35 -- nvme/functions.sh@23 
-- # eval 'nvme2[sgls]="0x1"' 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.448 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.448 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.448 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.448 15:53:35 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.448 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.448 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.448 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.448 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.448 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.448 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.448 15:53:35 -- nvme/functions.sh@21 
-- # read -r reg val 00:11:24.448 15:53:35 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.448 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.448 15:53:35 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.448 15:53:35 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:24.448 15:53:35 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:24.448 15:53:35 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:24.448 15:53:35 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:24.448 15:53:35 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:24.448 15:53:35 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:24.448 15:53:35 -- nvme/functions.sh@18 -- # shift 00:11:24.448 15:53:35 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.448 15:53:35 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:24.448 15:53:35 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.448 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.448 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.448 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:11:24.448 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.448 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.448 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # 
nvme2n1[nsfeat]=0x14 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.449 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.449 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.449 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.449 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.449 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.449 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.449 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.449 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.449 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.449 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.449 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.449 15:53:35 -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.449 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.449 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.449 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.449 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.449 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.449 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.449 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.449 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.449 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.449 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 
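The id-ns fields being captured here describe the namespace geometry: nsze, ncap and nuse are counts of logical blocks (0x17a17a for nvme2n1), and the low nibble of flbas (0x7) selects which lbafN entry is active; lbaf7, listed just below, carries lbads:12 and is flagged (in use), meaning log2(block size) = 12. Putting the values from this trace together:

    # Namespace capacity for nvme2n1, from the fields in this trace.
    nsze=$(( 0x17a17a ))      # 1548666 logical blocks
    flbas=$(( 0x7 & 0xf ))    # low nibble -> active LBA format index 7
    lbads=12                  # lbaf7: "ms:64 lbads:12 rp:0 (in use)"
    echo "$(( nsze * (1 << lbads) )) bytes"  # 1548666 * 4096 = 6343335936

So nvme2n1 is roughly a 5.9 GiB namespace with 4096-byte blocks and 64 bytes of metadata per block.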
00:11:24.449 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.449 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.449 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:24.449 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:24.449 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.450 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.450 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.450 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.450 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.450 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.450 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.450 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.450 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:24.450 15:53:35 -- nvme/functions.sh@23 
-- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.450 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.450 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.450 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.450 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.450 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.450 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.450 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.450 15:53:35 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:24.450 15:53:35 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.450 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.450 15:53:35 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:24.450 15:53:35 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:24.450 15:53:35 -- 
nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:24.450 15:53:35 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:11:24.450 15:53:35 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:24.450 15:53:35 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:24.450 15:53:35 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:24.450 15:53:35 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:11:24.450 15:53:35 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:11:24.450 15:53:35 -- scripts/common.sh@15 -- # local i 00:11:24.450 15:53:35 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:11:24.450 15:53:35 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:24.450 15:53:35 -- scripts/common.sh@24 -- # return 0 00:11:24.450 15:53:35 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:24.450 15:53:35 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:24.450 15:53:35 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:24.450 15:53:35 -- nvme/functions.sh@18 -- # shift 00:11:24.451 15:53:35 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.451 15:53:35 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:24.451 15:53:35 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.451 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.451 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.451 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.451 15:53:35 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.451 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.451 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.451 15:53:35 
-- nvme/functions.sh@21 -- # read -r reg val 00:11:24.451 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.451 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.451 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.451 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.451 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.451 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.451 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.451 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.451 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.451 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.451 15:53:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.451 15:53:35 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:24.451 15:53:35 -- nvme/functions.sh@23 -- 
00:11:24.451 15:53:35 -- nvme/functions.sh -- # nvme_get nvme3 id-ctrl /dev/nvme3 (continued): remaining identify-controller fields read into the nvme3 array. Noteworthy values: cntrltype=1, fguid=00000000-0000-0000-0000-000000000000, oacs=0x12a, acl=3, aerl=3, frmw=0x3, lpa=0x7, wctemp=343, cctemp=373, sqes=0x66, cqes=0x44, nn=256, oncs=0x15d, vwc=0x7, ocfs=0x3, sgls=0x1, subnqn=nqn.2019-08.org.qemu:12341, ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'. Every other field (crdt1-3, nvmsr, vwci, mec, elpe, npss, avscc, apsta, mtfa, hmpre, hmmin, tnvmcap, unvmcap, rpmbs, edstt, dsto, fwug, kas, hctma, mntmt, mxtmt, sanicap, hmminds, hmmaxd, nsetidmax, endgidmax, anatt, anacap, anagrpmax, nanagrpid, pels, domainid, megcap, maxcmd, fuses, fna, awun, awupf, icsvscc, nwpc, acwu, mnan, maxdna, maxcna, ioccsz, iorcsz, icdoff, fcatt, msdbd, ofcs) is 0.
00:11:24.454 15:53:35 -- nvme/functions.sh@55 -- # namespace /sys/class/nvme/nvme3/nvme3n1 exists; nvme_get nvme3n1 id-ns /dev/nvme3n1 begins.
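The xtrace condensed above is nvme/functions.sh doing its register scrape: nvme_get pipes `/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3` through a `read -r reg val` loop with IFS=: and evals each field/value pair into a bash associative array. A minimal sketch of the same pattern, assuming nvme-cli is installed and the device is kernel-visible (the array name ctrl_info and the exact trimming are illustrative, not the functions.sh code verbatim):

    #!/usr/bin/env bash
    # Parse "field : value" lines from `nvme id-ctrl` into an associative array.
    declare -A ctrl_info
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # field names are padded for alignment
        [[ -n $reg && -n $val ]] || continue
        ctrl_info[$reg]=${val# }        # drop the space that follows ':'
    done < <(nvme id-ctrl /dev/nvme3)
    echo "oncs=${ctrl_info[oncs]} subnqn=${ctrl_info[subnqn]}"

Keying every field by name like this is what lets later helpers such as get_nvme_ctrl_feature look up a single register (for example oncs) without re-running nvme-cli.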
00:11:24.454 15:53:35 -- nvme/functions.sh -- # nvme_get nvme3n1 id-ns /dev/nvme3n1: nsze=0x140000, ncap=0x140000, nuse=0x140000, nsfeat=0x14, nlbaf=7, flbas=0x4, mc=0x3, dpc=0x1f, dlfeat=1, mssrl=128, mcl=128, msrc=127, nguid=00000000000000000000000000000000, eui64=0000000000000000; the remaining id-ns fields (dps, nmic, rescap, fpi, nawun, nawupf, nacwu, nabsn, nabo, nabspf, noiob, nvmcap, npwg, npwa, npdg, npda, nows, nulbaf, anagrpid, nsattr, nvmsetid, endgid) are 0. Eight LBA formats are reported: lbaf0-3 at lbads:9 (512-byte blocks) and lbaf4-7 at lbads:12 (4096-byte blocks), each set with ms:0/8/16/64; lbaf4 (ms:0 lbads:12 rp:0) is in use.
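In those descriptors, lbads is the base-2 logarithm of the data block size and ms is the per-block metadata size in bytes, so the in-use format lbaf4 means 2^12 = 4096-byte blocks with no metadata. A one-line decode, with the values copied from the log:

    # lbads is log2(block size); ms is metadata bytes per block.
    lbads=12 ms=0
    echo "data block: $(( 1 << lbads )) B, metadata: ${ms} B"   # -> 4096 B, 0 B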
00:11:24.456 15:53:35 -- nvme/functions.sh@58 -- # nvme3 registered: ctrls[nvme3]=nvme3, nvmes[nvme3]=nvme3_ns, bdfs[nvme3]=0000:00:07.0, ordered_ctrls[3]=nvme3; all 4 controllers are now scanned.
00:11:24.456 15:53:35 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc: ctrl_has_scc runs for each controller in turn, looking up its oncs register via get_nvme_ctrl_feature and testing bit 8, the Simple Copy support bit. nvme1 reports oncs=0x15d, passes, and is echoed; the same check proceeds for nvme0.
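The feature gate is a plain bit test on the ONCS field cached during the scan. A sketch of the check ctrl_has_scc performs, with the oncs value taken from the log (variable names illustrative):

    # ONCS is a bit field of optional NVM command support; bit 8 = Simple Copy.
    oncs=0x15d
    if (( oncs & (1 << 8) )); then
        echo "Simple Copy supported"    # 0x15d has bit 8 (0x100) set
    fi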
15:53:35 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0, nvme3 and nvme2: each also reports oncs=0x15d, passes the bit-8 test and is echoed. With SCC-capable controllers found, the first one is selected:
00:11:24.457 15:53:35 -- nvme/functions.sh@206 -- # echo nvme1
00:11:24.457 15:53:35 -- nvme/functions.sh@207 -- # return 0
00:11:24.457 15:53:35 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:11:24.457 15:53:35 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:08.0
00:11:24.457 15:53:35 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:11:25.396 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:11:25.657 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:11:25.657 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:11:25.657 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:11:25.657 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:11:25.657 15:53:36 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:11:25.657 15:53:36 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:25.657 15:53:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:25.657 15:53:36 -- common/autotest_common.sh@10 -- # set +x 00:11:25.657 ************************************ 00:11:25.657 START TEST nvme_simple_copy 00:11:25.657 ************************************ 00:11:25.657 15:53:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:11:25.918 Initializing NVMe Controllers 00:11:25.918 Attaching to 0000:00:08.0 00:11:25.918 Controller supports SCC. Attached to 0000:00:08.0 00:11:25.918 Namespace ID: 1 size: 4GB 00:11:25.918 Initialization complete. 00:11:25.918 00:11:25.918 Controller QEMU NVMe Ctrl (12342 ) 00:11:25.918 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:25.918 Namespace Block Size:4096 00:11:25.918 Writing LBAs 0 to 63 with Random Data 00:11:25.918 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:25.918 LBAs matching Written Data: 64 00:11:25.918 00:11:25.918 real 0m0.274s 00:11:25.918 user 0m0.093s 00:11:25.918 sys 0m0.078s 00:11:25.918 15:53:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:25.918 ************************************ 00:11:25.918 END TEST nvme_simple_copy 00:11:25.918 ************************************ 00:11:25.918 15:53:37 -- common/autotest_common.sh@10 -- # set +x 00:11:25.918 00:11:25.918 real 0m7.836s 00:11:25.918 user 0m1.053s 00:11:25.918 sys 0m1.545s 00:11:25.918 15:53:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:25.918 15:53:37 -- common/autotest_common.sh@10 -- # set +x 00:11:25.918 ************************************ 00:11:25.918 END TEST nvme_scc 00:11:25.918 ************************************ 00:11:26.179 15:53:37 -- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]] 00:11:26.179 15:53:37 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:11:26.179 15:53:37 -- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]] 00:11:26.179 15:53:37 -- spdk/autotest.sh@225 -- # [[ 1 -eq 1 ]] 00:11:26.179 15:53:37 -- spdk/autotest.sh@226 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:26.179 15:53:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:26.179 15:53:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:26.179 15:53:37 -- common/autotest_common.sh@10 -- # set +x 00:11:26.179 ************************************ 00:11:26.179 START TEST nvme_fdp 00:11:26.179 ************************************ 00:11:26.179 15:53:37 -- common/autotest_common.sh@1114 -- # test/nvme/nvme_fdp.sh 00:11:26.179 * Looking for test storage... 
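The test wrote LBAs 0-63 with random data, issued a Simple Copy to destination LBA 256, read the destination back, and found all 64 LBAs matching. The same end-state could in principle be verified by hand, though only on a kernel-visible namespace (here the controller is attached over SPDK's PCIe path, so the device would first have to be rebound to the nvme driver). A hedged sketch; the device path is an assumption, while the offsets and 4096-byte block size come from the log:

    # Compare source LBAs 0-63 with the copy at LBA 256 (4096-byte blocks).
    bs=4096 dev=/dev/nvme1n1
    dd if=$dev bs=$bs skip=0   count=64 of=/tmp/src.bin status=none
    dd if=$dev bs=$bs skip=256 count=64 of=/tmp/dst.bin status=none
    cmp /tmp/src.bin /tmp/dst.bin && echo "all 64 LBAs match"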
00:11:26.179 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:11:26.179 15:53:37 -- common/autotest_common.sh@1690 -- # the installed lcov version (`lcov --version | awk '{print $NF}'`) is checked with cmp_versions 1.15 '<' 2: both version strings are split on IFS=.-: into component arrays and compared field by field; 1 < 2 at the first component, so the comparison returns 0. The lcov is new enough, and LCOV_OPTS and LCOV are exported with --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 plus the genhtml/geninfo options (--rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1).
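A minimal sketch of that component-wise comparison, assuming purely numeric components (the function name version_lt is illustrative; scripts/common.sh implements the same idea inside cmp_versions):

    # Compare dotted versions component by component, like cmp_versions does.
    version_lt() {                      # returns 0 if $1 < $2
        local -a a b; local i
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                        # versions are equal
    }
    version_lt 1.15 2 && echo "1.15 sorts before 2"

Missing components default to 0, so 1.15 compares against 2 as (1,15) vs (2,0) and loses on the first field, which is exactly the short-circuit seen in the xtrace.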
15:53:37 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
15:53:37 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
15:53:37 -- paths/export.sh@2 -- # /etc/opt/spdk-pkgdep/paths/export.sh prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to PATH and exports it; the script prepends on every sourcing, so each of those directories appears four times in the PATH it finally echoes.
15:53:37 -- nvme/functions.sh@10 -- # declare -A ctrls nvmes bdfs; declare -a ordered_ctrls
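The quadruplicated PATH is harmless, since lookup stops at the first match, but it grows with every sourcing. If one wanted to squash it, a standard order-preserving dedup would do; this is an illustration, not part of the test scripts:

    # Deduplicate PATH entries, keeping the first occurrence of each.
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
    PATH=${PATH%:}                      # trim the trailing ':' awk appends
    export PATH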
15:53:37 -- nvme/functions.sh@14 -- # nvme_name= 00:11:26.180 15:53:37 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:26.180 15:53:37 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:26.752 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:26.752 Waiting for block devices as requested 00:11:26.752 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:26.752 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:27.014 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:27.014 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:32.308 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:32.308 15:53:43 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:32.308 15:53:43 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:32.308 15:53:43 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:32.308 15:53:43 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:32.308 15:53:43 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:32.308 15:53:43 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:32.308 15:53:43 -- scripts/common.sh@15 -- # local i 00:11:32.308 15:53:43 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:32.308 15:53:43 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:32.308 15:53:43 -- scripts/common.sh@24 -- # return 0 00:11:32.308 15:53:43 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:32.308 15:53:43 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:32.308 15:53:43 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:32.308 15:53:43 -- nvme/functions.sh@18 -- # shift 00:11:32.308 15:53:43 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:32.308 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.308 15:53:43 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:32.308 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.308 15:53:43 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:32.308 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.308 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.308 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:32.308 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:32.308 15:53:43 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:32.308 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.308 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.308 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:32.308 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:32.308 15:53:43 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:32.308 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.308 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.309 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.309 15:53:43 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:32.309 15:53:43 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.309 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.309 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.309 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.309 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.309 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.309 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.309 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.309 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.309 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.309 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.309 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # eval 
'nvme0[ctratt]="0x88010"' 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.309 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.309 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.309 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.309 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.309 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.309 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.309 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.309 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.309 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.309 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.309 
15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.309 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.309 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.309 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:32.309 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.310 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.310 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.310 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.310 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.310 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.310 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.310 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.310 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.310 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:32.310 
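(Note on the wctemp/cctemp values parsed just above: per the NVMe Identify Controller layout these fields are Kelvin, so 343/373 are the usual 70 C warning / 100 C critical composite-temperature thresholds. A minimal conversion sketch; kelvin_to_celsius is a hypothetical helper, not part of nvme/functions.sh.)

# WCTEMP/CCTEMP are reported in Kelvin; convert for readability.
kelvin_to_celsius() { echo $(($1 - 273)); }

kelvin_to_celsius 343    # -> 70  (warning threshold traced above)
kelvin_to_celsius 373    # -> 100 (critical threshold traced above)
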
15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.310 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.310 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.310 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.310 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.310 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.310 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.310 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.310 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.310 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.310 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.310 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:32.310 15:53:43 -- 
nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.310 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.310 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.310 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.310 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.310 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.310 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.310 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:32.310 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.311 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.311 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.311 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.311 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.311 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.311 
15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.311 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.311 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.311 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.311 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.311 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.311 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.311 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.311 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.311 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.311 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 
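(Note on the sqes=0x66 / cqes=0x44 values parsed just above: SQES/CQES pack two log2 queue-entry sizes per byte, bits 3:0 the required size and bits 7:4 the maximum. A hedged sketch; decode_qes is a hypothetical helper used only to unpack the traced values.)

# Each nibble is a power-of-two entry size in bytes.
decode_qes() {
    local v=$(($1))
    echo "required=$((1 << (v & 0xf)))B max=$((1 << (v >> 4)))B"
}

decode_qes 0x66    # -> required=64B max=64B  (submission queue entries)
decode_qes 0x44    # -> required=16B max=16B  (completion queue entries)
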
00:11:32.311 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.311 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.311 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.311 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.311 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.311 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.311 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.311 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.311 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.311 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.311 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:32.311 15:53:43 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.312 15:53:43 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # 
nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.312 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.312 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.312 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.312 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.312 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.312 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.312 15:53:43 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.312 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.312 15:53:43 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.312 15:53:43 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:32.312 15:53:43 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 
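(The trace above is nvme_get from nvme/functions.sh finishing the id-ctrl parse for nvme0: each "field : value" line of `nvme id-ctrl` output is split on ":" and eval'd into a global associative array named after the controller. A minimal standalone sketch of that pattern, simplified relative to the real helper and assuming nvme-cli's `nvme` binary on PATH.)

#!/usr/bin/env bash
# Sketch of the parsing loop visible in the trace: read "field : value"
# lines and store them in a global associative array (e.g. nvme0[ctratt]).
nvme_get() {
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue                # skip header/blank lines
        reg=${reg//[[:space:]]/}                 # e.g. "ps 0" -> "ps0"
        val=${val#"${val%%[![:space:]]*}"}       # strip leading blanks
        eval "${ref}[${reg}]=\"\$val\""          # nvme0[ctratt]=0x88010
    done < <(nvme id-ctrl "$dev")
}

nvme_get nvme0 /dev/nvme0
echo "${nvme0[ctratt]}"    # prints 0x88010 for the controller traced above
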
00:11:32.312 15:53:43 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:32.312 15:53:43 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:11:32.312 15:53:43 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:32.312 15:53:43 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:32.312 15:53:43 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:32.312 15:53:43 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:11:32.312 15:53:43 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:11:32.312 15:53:43 -- scripts/common.sh@15 -- # local i 00:11:32.312 15:53:43 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:11:32.312 15:53:43 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:32.312 15:53:43 -- scripts/common.sh@24 -- # return 0 00:11:32.312 15:53:43 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:32.312 15:53:43 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:32.312 15:53:43 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:32.312 15:53:43 -- nvme/functions.sh@18 -- # shift 00:11:32.312 15:53:43 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.312 15:53:43 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:32.312 15:53:43 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.312 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.312 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.312 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.312 15:53:43 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.312 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.312 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # 
IFS=: 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.312 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.312 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.312 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:32.312 15:53:43 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:32.312 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.313 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.313 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.313 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.313 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.313 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.313 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.313 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.313 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:32.313 
15:53:43 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.313 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.313 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.313 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.313 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.313 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.313 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.313 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.313 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.313 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.313 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 
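(Note on the ver=0x10400 value parsed a few entries above: the VER field packs the NVMe spec version as major in bits 31:16, minor in bits 15:8, tertiary in bits 7:0. A hedged sketch; decode_ver is a hypothetical helper for illustration only.)

decode_ver() {
    local v=$(($1))
    echo "$((v >> 16)).$(((v >> 8) & 0xff)).$((v & 0xff))"
}

decode_ver 0x10400    # -> 1.4.0, matching this QEMU NVMe controller
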
00:11:32.313 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.313 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.313 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.313 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.313 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.313 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.313 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.313 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.313 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:32.313 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 
00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 
00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # 
eval 'nvme1[megcap]="0"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:32.314 15:53:43 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.314 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.314 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.315 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.315 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.315 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.315 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.315 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.315 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.315 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.315 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.315 15:53:43 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.315 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.315 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.315 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.315 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.315 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.315 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.315 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.315 15:53:43 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.315 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.315 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:32.315 15:53:43 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.315 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.315 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.315 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:32.315 15:53:43 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.316 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.316 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.316 15:53:43 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.316 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.316 15:53:43 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.316 15:53:43 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:32.316 15:53:43 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:32.316 15:53:43 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:32.316 15:53:43 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:32.316 15:53:43 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:32.316 15:53:43 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:32.316 15:53:43 -- nvme/functions.sh@18 -- # shift 00:11:32.316 15:53:43 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.316 15:53:43 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:32.316 15:53:43 -- nvme/functions.sh@22 -- # [[ 
-n '' ]] 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.316 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.316 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.316 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.316 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.316 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.316 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.316 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.316 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.316 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.316 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.316 15:53:43 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.316 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.316 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.316 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.316 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.316 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.316 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:32.316 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.316 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.317 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.317 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.317 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.317 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 
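(Note on the nvme1n1 id-ns values in this parse: nsze=0x100000 is the namespace size in logical blocks, and FLBAS bits 3:0 select the in-use LBA format; the lbaf4 entry reported a little further down in this same parse shows "lbads:12", i.e. 4096-byte blocks. A hedged arithmetic sketch using those traced values.)

nsze=0x100000    # namespace size in logical blocks
flbas=0x4        # bits 3:0 -> LBA format index 4
lbads=12         # from lbaf4 "ms:0 lbads:12 rp:0 (in use)"

fmt=$((flbas & 0xf))
block=$((1 << lbads))
echo "format=$fmt block=${block}B capacity=$(((nsze * block) >> 30))GiB"
# -> format=4 block=4096B capacity=4GiB
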
00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.317 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.317 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.317 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.317 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.317 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.317 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.317 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.317 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.317 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.317 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.317 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:32.317 15:53:43 -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.317 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.317 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.317 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.317 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.317 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.317 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.317 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.317 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.317 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.317 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:32.317 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.318 15:53:43 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:32.318 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.318 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.318 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.318 15:53:43 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:32.318 15:53:43 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:32.318 15:53:43 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:11:32.318 15:53:43 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:11:32.318 15:53:43 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:11:32.318 15:53:43 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:11:32.318 15:53:43 -- nvme/functions.sh@18 -- # shift 00:11:32.318 15:53:43 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.318 15:53:43 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:11:32.318 15:53:43 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.318 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.318 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.318 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.318 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 
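[editor's note] The trace above is nvme/functions.sh enumerating /dev/nvme1n1 and moving on to /dev/nvme1n2: nvme_get runs nvme-cli's id-ns and folds each "reg: val" pair into a global associative array. A minimal sketch of that pattern, assuming plain key/value lines from nvme-cli (the helper body and trimming below are illustrative, not the script's actual code):

    # sketch: parse `nvme id-ns` output into a bash associative array
    declare -gA nvme1n1=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                  # register name, e.g. "nsze"
        val=$(sed 's/^[[:space:]]*//' <<<"$val")  # drop padding before the value
        [[ -n $reg && -n $val ]] && nvme1n1[$reg]=$val  # nvme1n1[nsze]=0x100000
    done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1)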
00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.318 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.318 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.318 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.318 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.318 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.318 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.318 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.318 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.318 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.318 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.318 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:11:32.318 15:53:43 -- 
nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.318 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.318 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.318 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.318 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:11:32.318 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.319 
15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:11:32.319 15:53:43 -- 
nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:32.319 15:53:43 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.319 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.319 15:53:43 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:11:32.319 15:53:43 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:32.319 15:53:43 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 
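[editor's note] Each namespace reports eight LBA formats (lbaf0..lbaf7), and flbas=0x4 selects lbaf4 as the one in use. lbads is the log2 of the LBA data size, so the in-use format decodes to 4 KiB blocks with no metadata; a quick check, with the value copied from the trace:

    lbaf='ms:0 lbads:12 rp:0 (in use)'
    lbads=${lbaf##*lbads:}                 # strip everything up to the lbads field
    lbads=${lbads%% *}                     # keep just the number: 12
    echo "block size: $((1 << lbads))"     # -> 4096 bytes; ms:0 = no metadata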
00:11:32.319 15:53:43 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:11:32.319 15:53:43 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:11:32.319 15:53:43 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:11:32.320 15:53:43 -- nvme/functions.sh@18 -- # shift 00:11:32.320 15:53:43 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.320 15:53:43 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:11:32.320 15:53:43 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.320 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.320 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.320 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.320 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.320 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.320 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.320 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.320 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.320 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.320 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.320 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.320 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.320 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.320 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.320 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.320 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.320 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.320 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.320 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.320 15:53:43 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:32.320 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.320 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.320 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.320 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:11:32.320 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.320 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.321 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.321 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.321 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.321 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.321 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.321 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.321 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # 
nvme1n3[nulbaf]=0 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.321 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.321 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.321 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.321 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.321 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.321 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.321 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.321 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.321 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.321 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:32.321 15:53:43 -- 
nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.321 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.321 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.321 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.321 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:32.321 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.322 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.322 15:53:43 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:11:32.322 15:53:43 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:32.322 15:53:43 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:32.322 15:53:43 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:11:32.322 15:53:43 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:32.322 15:53:43 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:32.322 15:53:43 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:32.322 15:53:43 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:11:32.322 15:53:43 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:11:32.322 15:53:43 -- scripts/common.sh@15 -- # local i 00:11:32.322 15:53:43 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:11:32.322 15:53:43 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:32.322 15:53:43 -- scripts/common.sh@24 -- # return 0 00:11:32.322 15:53:43 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:32.322 15:53:43 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:32.322 15:53:43 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:32.322 15:53:43 -- nvme/functions.sh@18 -- # shift 00:11:32.322 15:53:43 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.322 15:53:43 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:32.322 15:53:43 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.322 15:53:43 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:32.322 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.322 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.322 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.322 15:53:43 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.322 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.322 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.322 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.322 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.322 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.322 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.322 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 
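[editor's note] The id-ctrl fields just read (vid 0x1b36, ssvid 0x1af4, sn "12340", mn "QEMU NVMe Ctrl", fr "8.0.0") mark nvme2 as QEMU's emulated NVMe controller rather than real hardware, as expected on a vagrant VM host. The vendor id can be cross-checked from sysfs (path assumed from the /sys/class/nvme layout used by the script):

    cat /sys/class/nvme/nvme2/device/vendor    # expected: 0x1b36 (Red Hat / QEMU)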
00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.322 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.322 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.322 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.322 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.322 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.322 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.322 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.322 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:32.322 15:53:43 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:32.322 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.323 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.323 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.323 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:32.323 15:53:43 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:32.323 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.323 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.323 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.323 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:32.323 15:53:43 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:32.323 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.323 
15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.323 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.323 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:32.323 15:53:43 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:32.323 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.323 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.323 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.323 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:32.323 15:53:43 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:32.323 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.323 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.323 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.323 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:32.323 15:53:43 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:32.323 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.323 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.323 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:32.323 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:32.323 15:53:43 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:32.323 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.323 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.323 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:32.323 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:32.323 15:53:43 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:32.323 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.323 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.323 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:32.323 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:32.323 15:53:43 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:32.323 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.323 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.323 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:32.323 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:32.323 15:53:43 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:32.323 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.323 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.323 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:32.323 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:32.323 15:53:43 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:32.323 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.323 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.323 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.323 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:32.323 15:53:43 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:32.323 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.323 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.323 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.323 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:32.323 15:53:43 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:32.323 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.323 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.323 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.323 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:32.323 15:53:43 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:32.323 15:53:43 -- 
00:11:32.323 15:53:43 -- nvme/functions.sh@21-23 -- # nvme_get nvme2 id-ctrl /dev/nvme2, continued; per-register IFS=:/read/eval loop condensed:
00:11:32.323 15:53:43 --   apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0
00:11:32.324 15:53:43 --   kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0
00:11:32.324 15:53:43 --   anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0
00:11:32.325 15:53:43 --   fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:11:32.325 15:53:43 --   subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:11:32.325 15:53:43 --   ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:11:32.325 15:53:43 --   rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload='-'
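The repeated IFS=: / read -r reg val / eval triples traced above are nvme_get (test/nvme/functions.sh@16-23) splitting each "register : value" line of nvme-cli output into a global associative array named after the device. A minimal sketch of that pattern, reconstructed from the trace rather than quoted from the source:

    # Sketch only -- parse `nvme <cmd> <dev>` output into a global assoc array.
    nvme_get() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                   # e.g. nvme2=()   (functions.sh@20)
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue         # skip lines without "name : value"
            reg=${reg//[[:space:]]/}          # trim the register name
            eval "${ref}[${reg}]=\"\${val# }\""   # nvme2[apsta]="0", etc.
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }
    # Called as in this trace: nvme_get nvme2 id-ctrl /dev/nvme2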
00:11:32.325 15:53:43 -- nvme/functions.sh@53-57 -- # namespaces of nvme2: /sys/class/nvme/nvme2/nvme2n1 exists, ns_dev=nvme2n1
00:11:32.325 15:53:43 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:11:32.325 15:53:43 -- nvme/functions.sh@21-23 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1, per-register loop condensed:
00:11:32.325 15:53:43 --   nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0
00:11:32.326 15:53:43 --   nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:11:32.326 15:53:43 --   nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
00:11:32.327 15:53:43 --   anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:11:32.327 15:53:43 --   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:11:32.327 15:53:43 --   lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:11:32.327 15:53:43 -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme2n1
00:11:32.327 15:53:43 -- nvme/functions.sh@60-63 -- # ctrls[nvme2]=nvme2 nvmes[nvme2]=nvme2_ns bdfs[nvme2]=0000:00:06.0 ordered_ctrls[2]=nvme2
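The flbas/lbaf pair recorded above fixes the namespace geometry: the low nibble of flbas selects the active LBA format, and the lbads field inside "ms:M lbads:D rp:R" is log2 of the data block size, so nvme2n1 (flbas=0x7, lbaf7 = "ms:64 lbads:12") sits on 4096-byte blocks with 64 bytes of metadata per block. A hypothetical helper (illustration only, not part of functions.sh) that derives this from the arrays nvme_get just filled:

    # Hypothetical -- reads the associative arrays built by nvme_get above.
    block_size() {
        local -n _ns=$1                       # e.g. block_size nvme2n1
        local fmt=$(( ${_ns[flbas]} & 0xf ))  # 0x7 & 0xf = 7
        local lbaf=${_ns[lbaf$fmt]}           # 'ms:64 lbads:12 rp:0 (in use)'
        local lbads=${lbaf#*lbads:}           # '12 rp:0 (in use)'
        echo $(( 1 << ${lbads%% *} ))         # 2^12 = 4096
    }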
00:11:32.327 15:53:43 -- nvme/functions.sh@47-48 -- # next controller: /sys/class/nvme/nvme3 exists
00:11:32.327 15:53:43 -- nvme/functions.sh@49-50 -- # pci=0000:00:07.0, pci_can_use 0000:00:07.0
00:11:32.327 15:53:43 -- scripts/common.sh@15-24 -- # device not filtered (allow/block lists empty), return 0
00:11:32.327 15:53:43 -- nvme/functions.sh@51-52 -- # ctrl_dev=nvme3, nvme_get nvme3 id-ctrl /dev/nvme3
00:11:32.328 15:53:43 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:11:32.328 15:53:43 --   vid=0x1b36 ssvid=0x1af4 sn='12341 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400
00:11:32.328 15:53:43 --   cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
00:11:32.328 15:53:43 --   fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
00:11:32.329 15:53:43 --   oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
00:11:32.329 15:53:43 --   mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0
00:11:32.330 15:53:43 --   mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0
00:11:32.330 15:53:43 --   anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256
00:11:32.330 15:53:43 --   oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1
00:11:32.331 15:53:43 --   mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0
00:11:32.331 15:53:43 --   icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:11:32.331 15:53:43 --   ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:11:32.331 15:53:43 --   rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload='-'
00:11:32.331 15:53:43 -- nvme/functions.sh@53-57 -- # namespaces of nvme3: /sys/class/nvme/nvme3/nvme3n1 exists, ns_dev=nvme3n1
00:11:32.331 15:53:43 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1
00:11:32.331 15:53:43 -- nvme/functions.sh@21-23 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1, per-register loop condensed:
00:11:32.331 15:53:43 --   nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:11:32.332 15:53:43 --   nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # eval
'nvme3n1[npwg]="0"' 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.332 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.332 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.332 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.332 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.332 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.332 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.332 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.332 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.332 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.332 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.332 
15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.332 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.332 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:11:32.332 15:53:43 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.332 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.332 15:53:43 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:32.333 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:11:32.333 15:53:43 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:11:32.333 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.333 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.333 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:32.333 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:32.333 15:53:43 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:32.333 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.333 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.333 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:32.333 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:32.333 15:53:43 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:32.333 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.333 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.333 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:32.333 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:32.333 15:53:43 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:32.333 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.333 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.333 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:32.333 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:32.333 15:53:43 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:32.333 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.333 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.333 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:32.333 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:32.333 15:53:43 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:32.333 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.333 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.333 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:32.333 15:53:43 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:32.333 15:53:43 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:32.333 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.333 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.333 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:32.333 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:32.333 15:53:43 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:32.333 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.333 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.333 15:53:43 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:32.333 15:53:43 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:32.333 15:53:43 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:32.333 15:53:43 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.333 15:53:43 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.333 15:53:43 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:11:32.333 15:53:43 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:32.333 15:53:43 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:32.333 15:53:43 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:11:32.333 15:53:43 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:32.333 15:53:43 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:32.333 15:53:43 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:32.333 15:53:43 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:11:32.333 15:53:43 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:32.333 15:53:43 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:11:32.333 15:53:43 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:32.333 15:53:43 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:11:32.333 15:53:43 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:11:32.333 15:53:43 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:32.333 15:53:43 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:32.333 15:53:43 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:11:32.333 15:53:43 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:11:32.333 15:53:43 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:11:32.333 15:53:43 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:11:32.333 15:53:43 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:32.333 15:53:43 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:32.333 15:53:43 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:32.333 15:53:43 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:32.333 15:53:43 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:32.333 15:53:43 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:32.333 15:53:43 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:32.333 15:53:43 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:32.333 15:53:43 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:32.333 15:53:43 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:11:32.333 15:53:43 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:11:32.333 15:53:43 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:11:32.333 15:53:43 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:11:32.333 15:53:43 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:32.333 15:53:43 -- 
nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:32.333 15:53:43 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:32.333 15:53:43 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:32.333 15:53:43 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:32.333 15:53:43 -- nvme/functions.sh@76 -- # echo 0x88010 00:11:32.333 15:53:43 -- nvme/functions.sh@176 -- # ctratt=0x88010 00:11:32.333 15:53:43 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:32.333 15:53:43 -- nvme/functions.sh@197 -- # echo nvme0 00:11:32.333 15:53:43 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:32.333 15:53:43 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:11:32.333 15:53:43 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:11:32.333 15:53:43 -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:11:32.333 15:53:43 -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:11:32.333 15:53:43 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:32.333 15:53:43 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:32.333 15:53:43 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:32.333 15:53:43 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:32.333 15:53:43 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:32.333 15:53:43 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:32.333 15:53:43 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:32.333 15:53:43 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:32.333 15:53:43 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:32.333 15:53:43 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:11:32.333 15:53:43 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:11:32.333 15:53:43 -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:11:32.333 15:53:43 -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:11:32.333 15:53:43 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:32.333 15:53:43 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:32.333 15:53:43 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:32.333 15:53:43 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:32.333 15:53:43 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:32.333 15:53:43 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:32.333 15:53:43 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:32.333 15:53:43 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:32.333 15:53:43 -- nvme/functions.sh@204 -- # trap - ERR 00:11:32.333 15:53:43 -- nvme/functions.sh@204 -- # print_backtrace 00:11:32.333 15:53:43 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:11:32.333 15:53:43 -- common/autotest_common.sh@1142 -- # return 0 00:11:32.333 15:53:43 -- nvme/functions.sh@204 -- # trap - ERR 00:11:32.333 15:53:43 -- nvme/functions.sh@204 -- # print_backtrace 00:11:32.333 15:53:43 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:11:32.333 15:53:43 -- common/autotest_common.sh@1142 -- # return 0 00:11:32.333 15:53:43 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:11:32.333 15:53:43 -- nvme/functions.sh@206 -- # echo nvme0 00:11:32.333 15:53:43 -- nvme/functions.sh@207 -- # return 0 00:11:32.333 15:53:43 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme0 00:11:32.333 15:53:43 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:09.0 00:11:32.333 15:53:43 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:33.282 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:33.544 0000:00:06.0 (1b36 0010): nvme -> 
uio_pci_generic 00:11:33.544 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:33.544 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:11:33.544 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:33.544 15:53:44 -- nvme/nvme_fdp.sh@17 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:33.544 15:53:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:33.544 15:53:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:33.544 15:53:44 -- common/autotest_common.sh@10 -- # set +x 00:11:33.544 ************************************ 00:11:33.544 START TEST nvme_flexible_data_placement 00:11:33.544 ************************************ 00:11:33.544 15:53:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:33.806 Initializing NVMe Controllers 00:11:33.806 Attaching to 0000:00:09.0 00:11:33.806 Controller supports FDP Attached to 0000:00:09.0 00:11:33.806 Namespace ID: 1 Endurance Group ID: 1 00:11:33.806 Initialization complete. 00:11:33.806 00:11:33.806 ================================== 00:11:33.806 == FDP tests for Namespace: #01 == 00:11:33.806 ================================== 00:11:33.806 00:11:33.806 Get Feature: FDP: 00:11:33.806 ================= 00:11:33.806 Enabled: Yes 00:11:33.806 FDP configuration Index: 0 00:11:33.806 00:11:33.806 FDP configurations log page 00:11:33.806 =========================== 00:11:33.806 Number of FDP configurations: 1 00:11:33.806 Version: 0 00:11:33.806 Size: 112 00:11:33.806 FDP Configuration Descriptor: 0 00:11:33.806 Descriptor Size: 96 00:11:33.806 Reclaim Group Identifier format: 2 00:11:33.806 FDP Volatile Write Cache: Not Present 00:11:33.806 FDP Configuration: Valid 00:11:33.806 Vendor Specific Size: 0 00:11:33.806 Number of Reclaim Groups: 2 00:11:33.806 Number of Reclaim Unit Handles: 8 00:11:33.806 Max Placement Identifiers: 128 00:11:33.806 Number of Namespaces Supported: 256 00:11:33.806 Reclaim Unit Nominal Size: 6000000 bytes 00:11:33.806 Estimated Reclaim Unit Time Limit: Not Reported 00:11:33.806 RUH Desc #000: RUH Type: Initially Isolated 00:11:33.806 RUH Desc #001: RUH Type: Initially Isolated 00:11:33.806 RUH Desc #002: RUH Type: Initially Isolated 00:11:33.806 RUH Desc #003: RUH Type: Initially Isolated 00:11:33.806 RUH Desc #004: RUH Type: Initially Isolated 00:11:33.806 RUH Desc #005: RUH Type: Initially Isolated 00:11:33.806 RUH Desc #006: RUH Type: Initially Isolated 00:11:33.806 RUH Desc #007: RUH Type: Initially Isolated 00:11:33.806 00:11:33.806 FDP reclaim unit handle usage log page 00:11:33.806 ====================================== 00:11:33.806 Number of Reclaim Unit Handles: 8 00:11:33.806 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:33.806 RUH Usage Desc #001: RUH Attributes: Unused 00:11:33.806 RUH Usage Desc #002: RUH Attributes: Unused 00:11:33.806 RUH Usage Desc #003: RUH Attributes: Unused 00:11:33.806 RUH Usage Desc #004: RUH Attributes: Unused 00:11:33.806 RUH Usage Desc #005: RUH Attributes: Unused 00:11:33.806 RUH Usage Desc #006: RUH Attributes: Unused 00:11:33.806 RUH Usage Desc #007: RUH Attributes: Unused 00:11:33.806 00:11:33.806 FDP statistics log page 00:11:33.806 ======================= 00:11:33.806 Host bytes with metadata written: 999055360 00:11:33.806 Media bytes with metadata written: 999276544 00:11:33.806 Media bytes erased: 0 00:11:33.806 00:11:33.806 FDP Reclaim unit handle status 
00:11:33.806 ============================== 00:11:33.806 Number of RUHS descriptors: 2 00:11:33.806 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000073a 00:11:33.806 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:33.806 00:11:33.806 FDP write on placement id: 0 success 00:11:33.806 00:11:33.806 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:33.806 00:11:33.806 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:33.806 00:11:33.806 Get Feature: FDP Events for Placement handle: #0 00:11:33.806 ======================== 00:11:33.806 Number of FDP Events: 6 00:11:33.806 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:33.806 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:33.806 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:11:33.806 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:33.806 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:33.806 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:33.806 00:11:33.806 FDP events log page 00:11:33.806 =================== 00:11:33.806 Number of FDP events: 1 00:11:33.806 FDP Event #0: 00:11:33.806 Event Type: RU Not Written to Capacity 00:11:33.806 Placement Identifier: Valid 00:11:33.806 NSID: Valid 00:11:33.806 Location: Valid 00:11:33.806 Placement Identifier: 0 00:11:33.806 Event Timestamp: a 00:11:33.806 Namespace Identifier: 1 00:11:33.806 Reclaim Group Identifier: 0 00:11:33.806 Reclaim Unit Handle Identifier: 0 00:11:33.806 00:11:33.806 FDP test passed 00:11:33.806 ************************************ 00:11:33.806 END TEST nvme_flexible_data_placement 00:11:33.806 ************************************ 00:11:33.806 00:11:33.806 real 0m0.232s 00:11:33.806 user 0m0.062s 00:11:33.806 sys 0m0.068s 00:11:33.806 15:53:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:33.806 15:53:45 -- common/autotest_common.sh@10 -- # set +x 00:11:33.806 ************************************ 00:11:33.806 END TEST nvme_fdp 00:11:33.806 ************************************ 00:11:33.806 00:11:33.806 real 0m7.795s 00:11:33.806 user 0m1.075s 00:11:33.806 sys 0m1.473s 00:11:33.806 15:53:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:33.806 15:53:45 -- common/autotest_common.sh@10 -- # set +x 00:11:33.806 15:53:45 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:11:33.806 15:53:45 -- spdk/autotest.sh@233 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:33.806 15:53:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:33.806 15:53:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:33.806 15:53:45 -- common/autotest_common.sh@10 -- # set +x 00:11:34.069 ************************************ 00:11:34.069 START TEST nvme_rpc 00:11:34.069 ************************************ 00:11:34.069 15:53:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:34.069 * Looking for test storage... 
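To recap the controller selection that concluded above: nvme_get had parsed every controller's and namespace's id-ctrl/id-ns output into bash associative arrays (splitting each `reg : val` line on the colon), and get_ctrls_with_feature then tested each controller's CTRATT identify field (reg=ctratt in the trace) for bit 19, the Flexible Data Placement attribute. Only nvme0 reported it (ctratt=0x88010); nvme1, nvme2, and nvme3 reported 0x8000, so the loop echoed nvme0 and the FDP test ran against its BDF, 0000:00:09.0. A minimal sketch of the bit test, using the values from this run:

    ctratt=0x88010                    # nvme0, captured by nvme_get above
    if (( ctratt & 1 << 19 )); then   # 1 << 19 == 0x80000, the FDP bit
        echo "controller supports FDP"
    fi
    # 0x8000 & 0x80000 == 0, so the other three controllers fail this check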
00:11:34.069 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:34.069 15:53:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:34.069 15:53:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:34.069 15:53:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:34.069 15:53:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:34.069 15:53:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:34.069 15:53:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:34.069 15:53:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:34.069 15:53:45 -- scripts/common.sh@335 -- # IFS=.-: 00:11:34.069 15:53:45 -- scripts/common.sh@335 -- # read -ra ver1 00:11:34.069 15:53:45 -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.069 15:53:45 -- scripts/common.sh@336 -- # read -ra ver2 00:11:34.069 15:53:45 -- scripts/common.sh@337 -- # local 'op=<' 00:11:34.069 15:53:45 -- scripts/common.sh@339 -- # ver1_l=2 00:11:34.069 15:53:45 -- scripts/common.sh@340 -- # ver2_l=1 00:11:34.069 15:53:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:34.069 15:53:45 -- scripts/common.sh@343 -- # case "$op" in 00:11:34.069 15:53:45 -- scripts/common.sh@344 -- # : 1 00:11:34.069 15:53:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:34.069 15:53:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:34.069 15:53:45 -- scripts/common.sh@364 -- # decimal 1 00:11:34.069 15:53:45 -- scripts/common.sh@352 -- # local d=1 00:11:34.069 15:53:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.069 15:53:45 -- scripts/common.sh@354 -- # echo 1 00:11:34.069 15:53:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:34.069 15:53:45 -- scripts/common.sh@365 -- # decimal 2 00:11:34.069 15:53:45 -- scripts/common.sh@352 -- # local d=2 00:11:34.069 15:53:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.069 15:53:45 -- scripts/common.sh@354 -- # echo 2 00:11:34.069 15:53:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:34.069 15:53:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:34.069 15:53:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:34.069 15:53:45 -- scripts/common.sh@367 -- # return 0 00:11:34.069 15:53:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.069 15:53:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:34.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.069 --rc genhtml_branch_coverage=1 00:11:34.069 --rc genhtml_function_coverage=1 00:11:34.069 --rc genhtml_legend=1 00:11:34.069 --rc geninfo_all_blocks=1 00:11:34.069 --rc geninfo_unexecuted_blocks=1 00:11:34.069 00:11:34.069 ' 00:11:34.069 15:53:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:34.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.069 --rc genhtml_branch_coverage=1 00:11:34.069 --rc genhtml_function_coverage=1 00:11:34.069 --rc genhtml_legend=1 00:11:34.069 --rc geninfo_all_blocks=1 00:11:34.069 --rc geninfo_unexecuted_blocks=1 00:11:34.069 00:11:34.069 ' 00:11:34.069 15:53:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:34.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.069 --rc genhtml_branch_coverage=1 00:11:34.069 --rc genhtml_function_coverage=1 00:11:34.069 --rc genhtml_legend=1 00:11:34.069 --rc geninfo_all_blocks=1 00:11:34.069 --rc geninfo_unexecuted_blocks=1 00:11:34.069 00:11:34.069 ' 00:11:34.069 15:53:45 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:34.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.069 --rc genhtml_branch_coverage=1 00:11:34.069 --rc genhtml_function_coverage=1 00:11:34.069 --rc genhtml_legend=1 00:11:34.069 --rc geninfo_all_blocks=1 00:11:34.069 --rc geninfo_unexecuted_blocks=1 00:11:34.069 00:11:34.069 ' 00:11:34.069 15:53:45 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:34.069 15:53:45 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:34.069 15:53:45 -- common/autotest_common.sh@1519 -- # bdfs=() 00:11:34.069 15:53:45 -- common/autotest_common.sh@1519 -- # local bdfs 00:11:34.069 15:53:45 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:11:34.069 15:53:45 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:11:34.069 15:53:45 -- common/autotest_common.sh@1508 -- # bdfs=() 00:11:34.069 15:53:45 -- common/autotest_common.sh@1508 -- # local bdfs 00:11:34.069 15:53:45 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:34.069 15:53:45 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:34.069 15:53:45 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:11:34.069 15:53:45 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:11:34.069 15:53:45 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:34.069 15:53:45 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:11:34.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.069 15:53:45 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:11:34.069 15:53:45 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66437 00:11:34.069 15:53:45 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:34.069 15:53:45 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66437 00:11:34.069 15:53:45 -- common/autotest_common.sh@829 -- # '[' -z 66437 ']' 00:11:34.069 15:53:45 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:34.069 15:53:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.069 15:53:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:34.069 15:53:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.069 15:53:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:34.069 15:53:45 -- common/autotest_common.sh@10 -- # set +x 00:11:34.330 [2024-11-29 15:53:45.535829] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
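The get_first_nvme_bdf trace above reduces to: enumerate the NVMe PCI addresses with gen_nvme.sh, assert that at least one was found, and take the first. Roughly, with the paths from this run:

    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1   # this run found 4 controllers
    echo "${bdfs[0]}"                 # -> 0000:00:06.0, the bdf handed to the test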
00:11:34.330 [2024-11-29 15:53:45.535995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66437 ] 00:11:34.330 [2024-11-29 15:53:45.687154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:34.590 [2024-11-29 15:53:45.907706] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:34.590 [2024-11-29 15:53:45.908161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.590 [2024-11-29 15:53:45.908214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.973 15:53:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:35.973 15:53:47 -- common/autotest_common.sh@862 -- # return 0 00:11:35.973 15:53:47 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:11:35.973 Nvme0n1 00:11:35.973 15:53:47 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:35.973 15:53:47 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:36.230 request: 00:11:36.230 { 00:11:36.230 "filename": "non_existing_file", 00:11:36.230 "bdev_name": "Nvme0n1", 00:11:36.230 "method": "bdev_nvme_apply_firmware", 00:11:36.230 "req_id": 1 00:11:36.230 } 00:11:36.230 Got JSON-RPC error response 00:11:36.230 response: 00:11:36.230 { 00:11:36.230 "code": -32603, 00:11:36.230 "message": "open file failed." 00:11:36.230 } 00:11:36.230 15:53:47 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:36.230 15:53:47 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:36.230 15:53:47 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:36.488 15:53:47 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:36.488 15:53:47 -- nvme/nvme_rpc.sh@40 -- # killprocess 66437 00:11:36.488 15:53:47 -- common/autotest_common.sh@936 -- # '[' -z 66437 ']' 00:11:36.488 15:53:47 -- common/autotest_common.sh@940 -- # kill -0 66437 00:11:36.488 15:53:47 -- common/autotest_common.sh@941 -- # uname 00:11:36.488 15:53:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:36.488 15:53:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66437 00:11:36.488 killing process with pid 66437 00:11:36.488 15:53:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:36.488 15:53:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:36.488 15:53:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66437' 00:11:36.488 15:53:47 -- common/autotest_common.sh@955 -- # kill 66437 00:11:36.488 15:53:47 -- common/autotest_common.sh@960 -- # wait 66437 00:11:37.874 ************************************ 00:11:37.874 END TEST nvme_rpc 00:11:37.874 ************************************ 00:11:37.874 00:11:37.874 real 0m3.907s 00:11:37.874 user 0m7.237s 00:11:37.874 sys 0m0.637s 00:11:37.874 15:53:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:37.874 15:53:49 -- common/autotest_common.sh@10 -- # set +x 00:11:37.874 15:53:49 -- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:37.874 15:53:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:37.874 15:53:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 
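The request/response pair above, in the nvme_rpc run, is the intended negative path: bdev_nvme_apply_firmware is handed a file that does not exist and must fail cleanly, which the script records as rv=1 before detaching the controller. Replayed by hand against the same target (the default RPC socket /var/tmp/spdk.sock is assumed):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1
    # -> JSON-RPC error -32603 "open file failed.", the result asserted on above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0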
00:11:37.874 15:53:49 -- common/autotest_common.sh@10 -- # set +x 00:11:37.874 ************************************ 00:11:37.874 START TEST nvme_rpc_timeouts 00:11:37.874 ************************************ 00:11:37.874 15:53:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:37.874 * Looking for test storage... 00:11:37.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:37.874 15:53:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:37.874 15:53:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:37.874 15:53:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:38.164 15:53:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:38.165 15:53:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:38.165 15:53:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:38.165 15:53:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:38.165 15:53:49 -- scripts/common.sh@335 -- # IFS=.-: 00:11:38.165 15:53:49 -- scripts/common.sh@335 -- # read -ra ver1 00:11:38.165 15:53:49 -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.165 15:53:49 -- scripts/common.sh@336 -- # read -ra ver2 00:11:38.165 15:53:49 -- scripts/common.sh@337 -- # local 'op=<' 00:11:38.165 15:53:49 -- scripts/common.sh@339 -- # ver1_l=2 00:11:38.165 15:53:49 -- scripts/common.sh@340 -- # ver2_l=1 00:11:38.165 15:53:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:38.165 15:53:49 -- scripts/common.sh@343 -- # case "$op" in 00:11:38.165 15:53:49 -- scripts/common.sh@344 -- # : 1 00:11:38.165 15:53:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:38.165 15:53:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:38.165 15:53:49 -- scripts/common.sh@364 -- # decimal 1 00:11:38.165 15:53:49 -- scripts/common.sh@352 -- # local d=1 00:11:38.165 15:53:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.165 15:53:49 -- scripts/common.sh@354 -- # echo 1 00:11:38.165 15:53:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:38.165 15:53:49 -- scripts/common.sh@365 -- # decimal 2 00:11:38.165 15:53:49 -- scripts/common.sh@352 -- # local d=2 00:11:38.165 15:53:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.165 15:53:49 -- scripts/common.sh@354 -- # echo 2 00:11:38.165 15:53:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:38.165 15:53:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:38.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:38.165 15:53:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:38.165 15:53:49 -- scripts/common.sh@367 -- # return 0 00:11:38.165 15:53:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.165 15:53:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:38.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.165 --rc genhtml_branch_coverage=1 00:11:38.165 --rc genhtml_function_coverage=1 00:11:38.165 --rc genhtml_legend=1 00:11:38.165 --rc geninfo_all_blocks=1 00:11:38.165 --rc geninfo_unexecuted_blocks=1 00:11:38.165 00:11:38.165 ' 00:11:38.165 15:53:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:38.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.165 --rc genhtml_branch_coverage=1 00:11:38.165 --rc genhtml_function_coverage=1 00:11:38.165 --rc genhtml_legend=1 00:11:38.165 --rc geninfo_all_blocks=1 00:11:38.165 --rc geninfo_unexecuted_blocks=1 00:11:38.165 00:11:38.165 ' 00:11:38.165 15:53:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:38.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.165 --rc genhtml_branch_coverage=1 00:11:38.165 --rc genhtml_function_coverage=1 00:11:38.165 --rc genhtml_legend=1 00:11:38.165 --rc geninfo_all_blocks=1 00:11:38.165 --rc geninfo_unexecuted_blocks=1 00:11:38.165 00:11:38.165 ' 00:11:38.165 15:53:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:38.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.165 --rc genhtml_branch_coverage=1 00:11:38.165 --rc genhtml_function_coverage=1 00:11:38.165 --rc genhtml_legend=1 00:11:38.165 --rc geninfo_all_blocks=1 00:11:38.165 --rc geninfo_unexecuted_blocks=1 00:11:38.165 00:11:38.165 ' 00:11:38.165 15:53:49 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:38.165 15:53:49 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66510 00:11:38.165 15:53:49 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66510 00:11:38.165 15:53:49 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66541 00:11:38.165 15:53:49 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:38.165 15:53:49 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66541 00:11:38.165 15:53:49 -- common/autotest_common.sh@829 -- # '[' -z 66541 ']' 00:11:38.165 15:53:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.165 15:53:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:38.165 15:53:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.165 15:53:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:38.165 15:53:49 -- common/autotest_common.sh@10 -- # set +x 00:11:38.165 15:53:49 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:38.165 [2024-11-29 15:53:49.450986] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:38.165 [2024-11-29 15:53:49.451129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66541 ] 00:11:38.424 [2024-11-29 15:53:49.601766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:38.682 [2024-11-29 15:53:49.863595] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:38.682 [2024-11-29 15:53:49.864480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.682 [2024-11-29 15:53:49.864557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.622 15:53:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:39.622 15:53:50 -- common/autotest_common.sh@862 -- # return 0 00:11:39.622 15:53:50 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:39.622 Checking default timeout settings: 00:11:39.622 15:53:50 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:39.883 15:53:51 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:39.883 Making settings changes with rpc: 00:11:39.883 15:53:51 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:40.141 15:53:51 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:40.141 Check default vs. modified settings: 00:11:40.141 15:53:51 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66510 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66510 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:40.399 Setting action_on_timeout is changed as expected. 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66510 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66510 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:40.399 Setting timeout_us is changed as expected. 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66510 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66510 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:40.399 Setting timeout_admin_us is changed as expected. 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66510 /tmp/settings_modified_66510 00:11:40.399 15:53:51 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66541 00:11:40.399 15:53:51 -- common/autotest_common.sh@936 -- # '[' -z 66541 ']' 00:11:40.399 15:53:51 -- common/autotest_common.sh@940 -- # kill -0 66541 00:11:40.399 15:53:51 -- common/autotest_common.sh@941 -- # uname 00:11:40.399 15:53:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:40.399 15:53:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66541 00:11:40.399 killing process with pid 66541 00:11:40.399 15:53:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:40.399 15:53:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:40.399 15:53:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66541' 00:11:40.399 15:53:51 -- common/autotest_common.sh@955 -- # kill 66541 00:11:40.399 15:53:51 -- common/autotest_common.sh@960 -- # wait 66541 00:11:41.774 RPC TIMEOUT SETTING TEST PASSED. 00:11:41.774 15:53:53 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
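The killprocess teardown traced just above is deliberately defensive: confirm the pid is still alive, check its command name (refusing to signal a sudo wrapper), then kill and reap it. Approximately (the Linux uname check is elided):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                            # still running?
        local name; name=$(ps --no-headers -o comm= "$pid")   # here: reactor_0
        [[ $name != sudo ]] || return 1                       # never kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }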
00:11:41.774 00:11:41.774 real 0m3.906s 00:11:41.774 user 0m7.313s 00:11:41.774 sys 0m0.697s 00:11:41.774 ************************************ 00:11:41.774 END TEST nvme_rpc_timeouts 00:11:41.774 ************************************ 00:11:41.774 15:53:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:41.774 15:53:53 -- common/autotest_common.sh@10 -- # set +x 00:11:41.774 15:53:53 -- spdk/autotest.sh@238 -- # '[' 1 -eq 0 ']' 00:11:41.774 15:53:53 -- spdk/autotest.sh@242 -- # [[ 1 -eq 1 ]] 00:11:41.774 15:53:53 -- spdk/autotest.sh@243 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:41.774 15:53:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:41.774 15:53:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:41.774 15:53:53 -- common/autotest_common.sh@10 -- # set +x 00:11:41.774 ************************************ 00:11:41.774 START TEST nvme_xnvme 00:11:41.774 ************************************ 00:11:41.774 15:53:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:42.035 * Looking for test storage... 00:11:42.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:42.035 15:53:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:42.035 15:53:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:42.035 15:53:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:42.035 15:53:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:42.035 15:53:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:42.035 15:53:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:42.035 15:53:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:42.035 15:53:53 -- scripts/common.sh@335 -- # IFS=.-: 00:11:42.035 15:53:53 -- scripts/common.sh@335 -- # read -ra ver1 00:11:42.035 15:53:53 -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.035 15:53:53 -- scripts/common.sh@336 -- # read -ra ver2 00:11:42.035 15:53:53 -- scripts/common.sh@337 -- # local 'op=<' 00:11:42.035 15:53:53 -- scripts/common.sh@339 -- # ver1_l=2 00:11:42.035 15:53:53 -- scripts/common.sh@340 -- # ver2_l=1 00:11:42.035 15:53:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:42.035 15:53:53 -- scripts/common.sh@343 -- # case "$op" in 00:11:42.035 15:53:53 -- scripts/common.sh@344 -- # : 1 00:11:42.035 15:53:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:42.035 15:53:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:42.035 15:53:53 -- scripts/common.sh@364 -- # decimal 1 00:11:42.035 15:53:53 -- scripts/common.sh@352 -- # local d=1 00:11:42.035 15:53:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.035 15:53:53 -- scripts/common.sh@354 -- # echo 1 00:11:42.035 15:53:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:42.035 15:53:53 -- scripts/common.sh@365 -- # decimal 2 00:11:42.035 15:53:53 -- scripts/common.sh@352 -- # local d=2 00:11:42.035 15:53:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.035 15:53:53 -- scripts/common.sh@354 -- # echo 2 00:11:42.035 15:53:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:42.035 15:53:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:42.035 15:53:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:42.035 15:53:53 -- scripts/common.sh@367 -- # return 0 00:11:42.035 15:53:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.035 15:53:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:42.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.035 --rc genhtml_branch_coverage=1 00:11:42.035 --rc genhtml_function_coverage=1 00:11:42.035 --rc genhtml_legend=1 00:11:42.035 --rc geninfo_all_blocks=1 00:11:42.035 --rc geninfo_unexecuted_blocks=1 00:11:42.035 00:11:42.035 ' 00:11:42.035 15:53:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:42.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.035 --rc genhtml_branch_coverage=1 00:11:42.035 --rc genhtml_function_coverage=1 00:11:42.035 --rc genhtml_legend=1 00:11:42.035 --rc geninfo_all_blocks=1 00:11:42.035 --rc geninfo_unexecuted_blocks=1 00:11:42.035 00:11:42.035 ' 00:11:42.035 15:53:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:42.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.035 --rc genhtml_branch_coverage=1 00:11:42.035 --rc genhtml_function_coverage=1 00:11:42.035 --rc genhtml_legend=1 00:11:42.035 --rc geninfo_all_blocks=1 00:11:42.035 --rc geninfo_unexecuted_blocks=1 00:11:42.035 00:11:42.035 ' 00:11:42.035 15:53:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:42.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.035 --rc genhtml_branch_coverage=1 00:11:42.035 --rc genhtml_function_coverage=1 00:11:42.035 --rc genhtml_legend=1 00:11:42.035 --rc geninfo_all_blocks=1 00:11:42.035 --rc geninfo_unexecuted_blocks=1 00:11:42.035 00:11:42.035 ' 00:11:42.035 15:53:53 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:42.035 15:53:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.035 15:53:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.035 15:53:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.035 15:53:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.035 15:53:53 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.035 15:53:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.035 15:53:53 -- paths/export.sh@5 -- # export PATH 00:11:42.035 15:53:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.035 15:53:53 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:11:42.035 15:53:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:42.035 15:53:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:42.035 15:53:53 -- common/autotest_common.sh@10 -- # set +x 00:11:42.035 ************************************ 00:11:42.035 START TEST xnvme_to_malloc_dd_copy 00:11:42.035 ************************************ 00:11:42.035 15:53:53 -- common/autotest_common.sh@1114 -- # malloc_to_xnvme_copy 00:11:42.035 15:53:53 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:11:42.035 15:53:53 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:11:42.035 15:53:53 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:11:42.035 15:53:53 -- dd/common.sh@191 -- # return 00:11:42.035 15:53:53 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:11:42.035 15:53:53 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:11:42.035 15:53:53 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:42.035 15:53:53 -- xnvme/xnvme.sh@18 -- # local io 00:11:42.035 15:53:53 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:11:42.035 15:53:53 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:11:42.035 15:53:53 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:11:42.035 15:53:53 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:11:42.036 15:53:53 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:11:42.036 15:53:53 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:11:42.036 15:53:53 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:11:42.036 15:53:53 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:11:42.036 15:53:53 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:42.036 15:53:53 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:42.036 15:53:53 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:42.036 15:53:53 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:42.036 15:53:53 -- xnvme/xnvme.sh@42 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:42.036 15:53:53 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:42.036 15:53:53 -- dd/common.sh@31 -- # xtrace_disable 00:11:42.036 15:53:53 -- common/autotest_common.sh@10 -- # set +x 00:11:42.036 { 00:11:42.036 "subsystems": [ 00:11:42.036 { 00:11:42.036 "subsystem": "bdev", 00:11:42.036 "config": [ 00:11:42.036 { 00:11:42.036 "params": { 00:11:42.036 "block_size": 512, 00:11:42.036 "num_blocks": 2097152, 00:11:42.036 "name": "malloc0" 00:11:42.036 }, 00:11:42.036 "method": "bdev_malloc_create" 00:11:42.036 }, 00:11:42.036 { 00:11:42.036 "params": { 00:11:42.036 "io_mechanism": "libaio", 00:11:42.036 "filename": "/dev/nullb0", 00:11:42.036 "name": "null0" 00:11:42.036 }, 00:11:42.036 "method": "bdev_xnvme_create" 00:11:42.036 }, 00:11:42.036 { 00:11:42.036 "method": "bdev_wait_for_examine" 00:11:42.036 } 00:11:42.036 ] 00:11:42.036 } 00:11:42.036 ] 00:11:42.036 } 00:11:42.036 [2024-11-29 15:53:53.423810] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:42.036 [2024-11-29 15:53:53.424086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66686 ] 00:11:42.296 [2024-11-29 15:53:53.574097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.555 [2024-11-29 15:53:53.752232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.461  [2024-11-29T15:53:56.835Z] Copying: 236/1024 [MB] (236 MBps) [2024-11-29T15:53:57.770Z] Copying: 485/1024 [MB] (249 MBps) [2024-11-29T15:53:58.705Z] Copying: 798/1024 [MB] (312 MBps) [2024-11-29T15:54:00.609Z] Copying: 1024/1024 [MB] (average 274 MBps) 00:11:49.178 00:11:49.178 15:54:00 -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:49.178 15:54:00 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:49.178 15:54:00 -- dd/common.sh@31 -- # xtrace_disable 00:11:49.178 15:54:00 -- common/autotest_common.sh@10 -- # set +x 00:11:49.178 { 00:11:49.178 "subsystems": [ 00:11:49.178 { 00:11:49.178 "subsystem": "bdev", 00:11:49.178 "config": [ 00:11:49.178 { 00:11:49.178 "params": { 00:11:49.178 "block_size": 512, 00:11:49.178 "num_blocks": 2097152, 00:11:49.178 "name": "malloc0" 00:11:49.178 }, 00:11:49.178 "method": "bdev_malloc_create" 00:11:49.178 }, 00:11:49.178 { 00:11:49.178 "params": { 00:11:49.178 "io_mechanism": "libaio", 00:11:49.178 "filename": "/dev/nullb0", 00:11:49.178 "name": "null0" 00:11:49.178 }, 00:11:49.178 "method": "bdev_xnvme_create" 00:11:49.178 }, 00:11:49.178 { 00:11:49.178 "method": "bdev_wait_for_examine" 00:11:49.178 } 00:11:49.178 ] 00:11:49.178 } 00:11:49.178 ] 00:11:49.178 } 00:11:49.178 [2024-11-29 15:54:00.513204] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
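(For reference, the completed libaio copy pass above can be reproduced outside the autotest harness. A minimal sketch, assuming the same vagrant layout and a 1 GiB null_blk instance; the JSON mirrors the gen_conf output captured in the log, and /tmp/xnvme_copy.json is only an illustrative scratch path:

    cat > /tmp/xnvme_copy.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_malloc_create",
              "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" } },
            { "method": "bdev_xnvme_create",
              "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" } },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    # copy the malloc bdev out through the xnvme (libaio) bdev, exactly as traced above
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_copy.json

Swapping --ib and --ob gives the read-back pass whose startup lines follow.)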
00:11:49.178 [2024-11-29 15:54:00.513335] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66773 ] 00:11:49.438 [2024-11-29 15:54:00.664629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.438 [2024-11-29 15:54:00.804379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.339  [2024-11-29T15:54:03.707Z] Copying: 314/1024 [MB] (314 MBps) [2024-11-29T15:54:04.642Z] Copying: 628/1024 [MB] (314 MBps) [2024-11-29T15:54:04.900Z] Copying: 943/1024 [MB] (315 MBps) [2024-11-29T15:54:06.803Z] Copying: 1024/1024 [MB] (average 314 MBps) 00:11:55.372 00:11:55.631 15:54:06 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:55.631 15:54:06 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:55.631 15:54:06 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:55.631 15:54:06 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:55.631 15:54:06 -- dd/common.sh@31 -- # xtrace_disable 00:11:55.631 15:54:06 -- common/autotest_common.sh@10 -- # set +x 00:11:55.631 { 00:11:55.631 "subsystems": [ 00:11:55.631 { 00:11:55.631 "subsystem": "bdev", 00:11:55.631 "config": [ 00:11:55.631 { 00:11:55.631 "params": { 00:11:55.631 "block_size": 512, 00:11:55.631 "num_blocks": 2097152, 00:11:55.631 "name": "malloc0" 00:11:55.631 }, 00:11:55.631 "method": "bdev_malloc_create" 00:11:55.631 }, 00:11:55.631 { 00:11:55.631 "params": { 00:11:55.631 "io_mechanism": "io_uring", 00:11:55.631 "filename": "/dev/nullb0", 00:11:55.631 "name": "null0" 00:11:55.631 }, 00:11:55.631 "method": "bdev_xnvme_create" 00:11:55.631 }, 00:11:55.631 { 00:11:55.631 "method": "bdev_wait_for_examine" 00:11:55.631 } 00:11:55.631 ] 00:11:55.631 } 00:11:55.631 ] 00:11:55.631 } 00:11:55.631 [2024-11-29 15:54:06.869270] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
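(The suite covers both I/O backends with one loop, re-pointing only the io_mechanism field before each pair of runs; roughly, as reconstructed from the xnvme.sh trace lines above:

    xnvme_io=(libaio io_uring)
    for io in "${xnvme_io[@]}"; do
      method_bdev_xnvme_create_0["io_mechanism"]=$io
      # write-out pass, then read-back pass (other arguments elided)
      spdk_dd --ib=malloc0 --ob=null0 ...
      spdk_dd --ib=null0 --ob=malloc0 ...
    done

so the io_uring passes that follow differ from the libaio ones only in that one parameter.)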
00:11:55.631 [2024-11-29 15:54:06.869391] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66851 ] 00:11:55.631 [2024-11-29 15:54:07.019654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.890 [2024-11-29 15:54:07.157460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.790  [2024-11-29T15:54:10.156Z] Copying: 321/1024 [MB] (321 MBps) [2024-11-29T15:54:11.089Z] Copying: 643/1024 [MB] (321 MBps) [2024-11-29T15:54:11.347Z] Copying: 964/1024 [MB] (321 MBps) [2024-11-29T15:54:13.250Z] Copying: 1024/1024 [MB] (average 321 MBps) 00:12:01.819 00:12:01.819 15:54:13 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:01.819 15:54:13 -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:01.819 15:54:13 -- dd/common.sh@31 -- # xtrace_disable 00:12:01.819 15:54:13 -- common/autotest_common.sh@10 -- # set +x 00:12:01.819 { 00:12:01.819 "subsystems": [ 00:12:01.819 { 00:12:01.819 "subsystem": "bdev", 00:12:01.819 "config": [ 00:12:01.819 { 00:12:01.819 "params": { 00:12:01.819 "block_size": 512, 00:12:01.819 "num_blocks": 2097152, 00:12:01.819 "name": "malloc0" 00:12:01.819 }, 00:12:01.819 "method": "bdev_malloc_create" 00:12:01.819 }, 00:12:01.819 { 00:12:01.819 "params": { 00:12:01.819 "io_mechanism": "io_uring", 00:12:01.819 "filename": "/dev/nullb0", 00:12:01.819 "name": "null0" 00:12:01.819 }, 00:12:01.819 "method": "bdev_xnvme_create" 00:12:01.819 }, 00:12:01.819 { 00:12:01.819 "method": "bdev_wait_for_examine" 00:12:01.819 } 00:12:01.819 ] 00:12:01.819 } 00:12:01.819 ] 00:12:01.819 } 00:12:01.819 [2024-11-29 15:54:13.119763] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
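(Worth noting before the suite finishes below: the /dev/nullb0 endpoint on the xnvme side is just the kernel null_blk module, bracketed around the runs by the dd/common.sh helpers traced earlier:

    modprobe null_blk gb=1   # init_null_blk: backs /dev/nullb0 with a 1 GiB null device
    # ... spdk_dd / bdevperf runs against /dev/nullb0 ...
    modprobe -r null_blk     # remove_null_blk

so none of the throughput figures here involve real media.)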
00:12:01.819 [2024-11-29 15:54:13.119995] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66927 ] 00:12:02.077 [2024-11-29 15:54:13.268042] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.077 [2024-11-29 15:54:13.405214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.977  [2024-11-29T15:54:16.343Z] Copying: 326/1024 [MB] (326 MBps) [2024-11-29T15:54:17.277Z] Copying: 653/1024 [MB] (326 MBps) [2024-11-29T15:54:17.546Z] Copying: 980/1024 [MB] (327 MBps) [2024-11-29T15:54:19.449Z] Copying: 1024/1024 [MB] (average 326 MBps) 00:12:08.018 00:12:08.018 15:54:19 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:08.018 15:54:19 -- dd/common.sh@195 -- # modprobe -r null_blk 00:12:08.018 ************************************ 00:12:08.018 END TEST xnvme_to_malloc_dd_copy 00:12:08.018 00:12:08.018 real 0m25.924s 00:12:08.018 user 0m22.821s 00:12:08.018 sys 0m2.550s 00:12:08.018 15:54:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:08.018 15:54:19 -- common/autotest_common.sh@10 -- # set +x 00:12:08.018 ************************************ 00:12:08.018 15:54:19 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:08.018 15:54:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:08.018 15:54:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:08.018 15:54:19 -- common/autotest_common.sh@10 -- # set +x 00:12:08.018 ************************************ 00:12:08.018 START TEST xnvme_bdevperf 00:12:08.018 ************************************ 00:12:08.018 15:54:19 -- common/autotest_common.sh@1114 -- # xnvme_bdevperf 00:12:08.018 15:54:19 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:08.018 15:54:19 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:12:08.018 15:54:19 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:12:08.018 15:54:19 -- dd/common.sh@191 -- # return 00:12:08.018 15:54:19 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:08.018 15:54:19 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:08.018 15:54:19 -- xnvme/xnvme.sh@60 -- # local io 00:12:08.018 15:54:19 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:08.018 15:54:19 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:08.018 15:54:19 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:08.018 15:54:19 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:08.018 15:54:19 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:08.018 15:54:19 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:08.018 15:54:19 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:08.018 15:54:19 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:08.018 15:54:19 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:08.018 15:54:19 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:08.018 15:54:19 -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:08.018 15:54:19 -- dd/common.sh@31 -- # xtrace_disable 00:12:08.018 15:54:19 -- common/autotest_common.sh@10 -- # set +x 00:12:08.018 { 00:12:08.018 "subsystems": [ 00:12:08.018 { 00:12:08.018 "subsystem": "bdev", 00:12:08.018 "config": [ 00:12:08.018 { 00:12:08.018 "params": { 00:12:08.018 "io_mechanism": "libaio", 
00:12:08.018 "filename": "/dev/nullb0", 00:12:08.018 "name": "null0" 00:12:08.018 }, 00:12:08.018 "method": "bdev_xnvme_create" 00:12:08.018 }, 00:12:08.018 { 00:12:08.018 "method": "bdev_wait_for_examine" 00:12:08.018 } 00:12:08.018 ] 00:12:08.018 } 00:12:08.018 ] 00:12:08.018 } 00:12:08.018 [2024-11-29 15:54:19.404364] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:08.018 [2024-11-29 15:54:19.404463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67024 ] 00:12:08.277 [2024-11-29 15:54:19.549479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.277 [2024-11-29 15:54:19.695702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.535 Running I/O for 5 seconds... 00:12:13.799 00:12:13.799 Latency(us) 00:12:13.799 [2024-11-29T15:54:25.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:13.799 [2024-11-29T15:54:25.230Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:13.799 null0 : 5.00 209136.82 816.94 0.00 0.00 303.84 113.43 447.41 00:12:13.799 [2024-11-29T15:54:25.230Z] =================================================================================================================== 00:12:13.799 [2024-11-29T15:54:25.230Z] Total : 209136.82 816.94 0.00 0.00 303.84 113.43 447.41 00:12:14.368 15:54:25 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:14.368 15:54:25 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:14.368 15:54:25 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:14.368 15:54:25 -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:14.368 15:54:25 -- dd/common.sh@31 -- # xtrace_disable 00:12:14.368 15:54:25 -- common/autotest_common.sh@10 -- # set +x 00:12:14.368 { 00:12:14.368 "subsystems": [ 00:12:14.368 { 00:12:14.368 "subsystem": "bdev", 00:12:14.368 "config": [ 00:12:14.368 { 00:12:14.368 "params": { 00:12:14.368 "io_mechanism": "io_uring", 00:12:14.368 "filename": "/dev/nullb0", 00:12:14.368 "name": "null0" 00:12:14.368 }, 00:12:14.368 "method": "bdev_xnvme_create" 00:12:14.368 }, 00:12:14.368 { 00:12:14.368 "method": "bdev_wait_for_examine" 00:12:14.368 } 00:12:14.368 ] 00:12:14.368 } 00:12:14.368 ] 00:12:14.368 } 00:12:14.368 [2024-11-29 15:54:25.585950] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:14.368 [2024-11-29 15:54:25.586079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67099 ] 00:12:14.368 [2024-11-29 15:54:25.736654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.628 [2024-11-29 15:54:25.957172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.889 Running I/O for 5 seconds... 
00:12:20.162 00:12:20.162 Latency(us) 00:12:20.162 [2024-11-29T15:54:31.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:20.162 [2024-11-29T15:54:31.593Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:20.162 null0 : 5.00 227866.27 890.10 0.00 0.00 278.52 150.45 469.46 00:12:20.162 [2024-11-29T15:54:31.593Z] =================================================================================================================== 00:12:20.162 [2024-11-29T15:54:31.593Z] Total : 227866.27 890.10 0.00 0.00 278.52 150.45 469.46 00:12:20.450 15:54:31 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:20.450 15:54:31 -- dd/common.sh@195 -- # modprobe -r null_blk 00:12:20.719 00:12:20.719 real 0m12.549s 00:12:20.719 user 0m10.069s 00:12:20.719 sys 0m2.250s 00:12:20.719 15:54:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:20.719 ************************************ 00:12:20.719 END TEST xnvme_bdevperf 00:12:20.719 ************************************ 00:12:20.719 15:54:31 -- common/autotest_common.sh@10 -- # set +x 00:12:20.719 ************************************ 00:12:20.719 END TEST nvme_xnvme 00:12:20.719 ************************************ 00:12:20.719 00:12:20.719 real 0m38.742s 00:12:20.719 user 0m33.000s 00:12:20.719 sys 0m4.922s 00:12:20.719 15:54:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:20.719 15:54:31 -- common/autotest_common.sh@10 -- # set +x 00:12:20.719 15:54:31 -- spdk/autotest.sh@244 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:20.719 15:54:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:20.719 15:54:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:20.719 15:54:31 -- common/autotest_common.sh@10 -- # set +x 00:12:20.719 ************************************ 00:12:20.719 START TEST blockdev_xnvme 00:12:20.719 ************************************ 00:12:20.719 15:54:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:20.719 * Looking for test storage... 00:12:20.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:20.719 15:54:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:20.719 15:54:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:20.719 15:54:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:20.719 15:54:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:20.719 15:54:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:20.719 15:54:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:20.719 15:54:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:20.719 15:54:32 -- scripts/common.sh@335 -- # IFS=.-: 00:12:20.719 15:54:32 -- scripts/common.sh@335 -- # read -ra ver1 00:12:20.719 15:54:32 -- scripts/common.sh@336 -- # IFS=.-: 00:12:20.719 15:54:32 -- scripts/common.sh@336 -- # read -ra ver2 00:12:20.719 15:54:32 -- scripts/common.sh@337 -- # local 'op=<' 00:12:20.719 15:54:32 -- scripts/common.sh@339 -- # ver1_l=2 00:12:20.719 15:54:32 -- scripts/common.sh@340 -- # ver2_l=1 00:12:20.719 15:54:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:20.719 15:54:32 -- scripts/common.sh@343 -- # case "$op" in 00:12:20.719 15:54:32 -- scripts/common.sh@344 -- # : 1 00:12:20.719 15:54:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:20.719 15:54:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:20.719 15:54:32 -- scripts/common.sh@364 -- # decimal 1 00:12:20.719 15:54:32 -- scripts/common.sh@352 -- # local d=1 00:12:20.719 15:54:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:20.719 15:54:32 -- scripts/common.sh@354 -- # echo 1 00:12:20.719 15:54:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:20.719 15:54:32 -- scripts/common.sh@365 -- # decimal 2 00:12:20.719 15:54:32 -- scripts/common.sh@352 -- # local d=2 00:12:20.719 15:54:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:20.719 15:54:32 -- scripts/common.sh@354 -- # echo 2 00:12:20.719 15:54:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:20.719 15:54:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:20.719 15:54:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:20.719 15:54:32 -- scripts/common.sh@367 -- # return 0 00:12:20.719 15:54:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:20.719 15:54:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:20.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.720 --rc genhtml_branch_coverage=1 00:12:20.720 --rc genhtml_function_coverage=1 00:12:20.720 --rc genhtml_legend=1 00:12:20.720 --rc geninfo_all_blocks=1 00:12:20.720 --rc geninfo_unexecuted_blocks=1 00:12:20.720 00:12:20.720 ' 00:12:20.720 15:54:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:20.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.720 --rc genhtml_branch_coverage=1 00:12:20.720 --rc genhtml_function_coverage=1 00:12:20.720 --rc genhtml_legend=1 00:12:20.720 --rc geninfo_all_blocks=1 00:12:20.720 --rc geninfo_unexecuted_blocks=1 00:12:20.720 00:12:20.720 ' 00:12:20.720 15:54:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:20.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.720 --rc genhtml_branch_coverage=1 00:12:20.720 --rc genhtml_function_coverage=1 00:12:20.720 --rc genhtml_legend=1 00:12:20.720 --rc geninfo_all_blocks=1 00:12:20.720 --rc geninfo_unexecuted_blocks=1 00:12:20.720 00:12:20.720 ' 00:12:20.720 15:54:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:20.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.720 --rc genhtml_branch_coverage=1 00:12:20.720 --rc genhtml_function_coverage=1 00:12:20.720 --rc genhtml_legend=1 00:12:20.720 --rc geninfo_all_blocks=1 00:12:20.720 --rc geninfo_unexecuted_blocks=1 00:12:20.720 00:12:20.720 ' 00:12:20.720 15:54:32 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:20.720 15:54:32 -- bdev/nbd_common.sh@6 -- # set -e 00:12:20.720 15:54:32 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:20.720 15:54:32 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:20.720 15:54:32 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:20.720 15:54:32 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:20.720 15:54:32 -- bdev/blockdev.sh@18 -- # : 00:12:20.720 15:54:32 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:12:20.720 15:54:32 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:12:20.720 15:54:32 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:12:20.720 15:54:32 -- bdev/blockdev.sh@672 -- # uname -s 00:12:20.720 15:54:32 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:12:20.720 15:54:32 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:12:20.720 15:54:32 -- bdev/blockdev.sh@680 -- # test_type=xnvme 00:12:20.720 15:54:32 -- bdev/blockdev.sh@681 -- # crypto_device= 00:12:20.720 15:54:32 -- bdev/blockdev.sh@682 -- # dek= 00:12:20.720 15:54:32 -- bdev/blockdev.sh@683 -- # env_ctx= 00:12:20.720 15:54:32 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:12:20.720 15:54:32 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:12:20.720 15:54:32 -- bdev/blockdev.sh@688 -- # [[ xnvme == bdev ]] 00:12:20.720 15:54:32 -- bdev/blockdev.sh@688 -- # [[ xnvme == crypto_* ]] 00:12:20.720 15:54:32 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:12:20.720 15:54:32 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=67246 00:12:20.720 15:54:32 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:20.720 15:54:32 -- bdev/blockdev.sh@47 -- # waitforlisten 67246 00:12:20.720 15:54:32 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:20.720 15:54:32 -- common/autotest_common.sh@829 -- # '[' -z 67246 ']' 00:12:20.720 15:54:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.720 15:54:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:20.720 15:54:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.720 15:54:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:20.720 15:54:32 -- common/autotest_common.sh@10 -- # set +x 00:12:20.979 [2024-11-29 15:54:32.213016] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:20.979 [2024-11-29 15:54:32.213263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67246 ] 00:12:20.979 [2024-11-29 15:54:32.351868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.238 [2024-11-29 15:54:32.487323] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:21.238 [2024-11-29 15:54:32.487599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.804 15:54:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:21.804 15:54:32 -- common/autotest_common.sh@862 -- # return 0 00:12:21.804 15:54:32 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:12:21.804 15:54:32 -- bdev/blockdev.sh@727 -- # setup_xnvme_conf 00:12:21.804 15:54:32 -- bdev/blockdev.sh@86 -- # local io_mechanism=io_uring 00:12:21.804 15:54:32 -- bdev/blockdev.sh@87 -- # local nvme nvmes 00:12:21.804 15:54:32 -- bdev/blockdev.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:22.063 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:22.063 Waiting for block devices as requested 00:12:22.063 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:12:22.321 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:12:22.321 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:12:22.321 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:12:27.589 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:12:27.589 15:54:38 -- bdev/blockdev.sh@90 -- # get_zoned_devs 00:12:27.589 15:54:38 -- 
common/autotest_common.sh@1664 -- # zoned_devs=() 00:12:27.590 15:54:38 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:12:27.590 15:54:38 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:12:27.590 15:54:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:27.590 15:54:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:12:27.590 15:54:38 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:12:27.590 15:54:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:12:27.590 15:54:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:27.590 15:54:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:27.590 15:54:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:12:27.590 15:54:38 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:12:27.590 15:54:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:27.590 15:54:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:27.590 15:54:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:27.590 15:54:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:12:27.590 15:54:38 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:12:27.590 15:54:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:27.590 15:54:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:27.590 15:54:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:27.590 15:54:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:12:27.590 15:54:38 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:12:27.590 15:54:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:12:27.590 15:54:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:27.590 15:54:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:27.590 15:54:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:12:27.590 15:54:38 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:12:27.590 15:54:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:12:27.590 15:54:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:27.590 15:54:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:27.590 15:54:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:12:27.590 15:54:38 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:12:27.590 15:54:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:27.590 15:54:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:27.590 15:54:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:27.590 15:54:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:12:27.590 15:54:38 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:12:27.590 15:54:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:27.590 15:54:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:27.590 15:54:38 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:27.590 15:54:38 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme0n1 ]] 00:12:27.590 15:54:38 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:27.590 15:54:38 -- bdev/blockdev.sh@94 -- # 
nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:27.590 15:54:38 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:27.590 15:54:38 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n1 ]] 00:12:27.590 15:54:38 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:27.590 15:54:38 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:27.590 15:54:38 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:27.590 15:54:38 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n2 ]] 00:12:27.590 15:54:38 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:27.590 15:54:38 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:27.590 15:54:38 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:27.590 15:54:38 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n3 ]] 00:12:27.590 15:54:38 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:27.590 15:54:38 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:27.590 15:54:38 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:27.590 15:54:38 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme2n1 ]] 00:12:27.590 15:54:38 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:27.590 15:54:38 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:27.590 15:54:38 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:27.590 15:54:38 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme3n1 ]] 00:12:27.590 15:54:38 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:27.590 15:54:38 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:27.590 15:54:38 -- bdev/blockdev.sh@97 -- # (( 6 > 0 )) 00:12:27.590 15:54:38 -- bdev/blockdev.sh@98 -- # rpc_cmd 00:12:27.590 15:54:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.590 15:54:38 -- common/autotest_common.sh@10 -- # set +x 00:12:27.590 15:54:38 -- bdev/blockdev.sh@98 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:27.590 nvme0n1 00:12:27.590 nvme1n1 00:12:27.590 nvme1n2 00:12:27.590 nvme1n3 00:12:27.590 nvme2n1 00:12:27.590 nvme3n1 00:12:27.590 15:54:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.590 15:54:38 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:12:27.590 15:54:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.590 15:54:38 -- common/autotest_common.sh@10 -- # set +x 00:12:27.590 15:54:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.590 15:54:38 -- bdev/blockdev.sh@738 -- # cat 00:12:27.590 15:54:38 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:12:27.590 15:54:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.590 15:54:38 -- common/autotest_common.sh@10 -- # set +x 00:12:27.590 15:54:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.590 15:54:38 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:12:27.590 15:54:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.590 15:54:38 -- common/autotest_common.sh@10 -- # set +x 00:12:27.590 15:54:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.590 15:54:38 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:27.590 15:54:38 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.590 15:54:38 -- common/autotest_common.sh@10 -- # set +x 00:12:27.590 15:54:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.590 15:54:38 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:12:27.590 15:54:38 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:12:27.590 15:54:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.590 15:54:38 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:12:27.590 15:54:38 -- common/autotest_common.sh@10 -- # set +x 00:12:27.590 15:54:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.590 15:54:38 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:12:27.590 15:54:38 -- bdev/blockdev.sh@747 -- # jq -r .name 00:12:27.590 15:54:38 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "176b26a5-111e-42cb-b922-ab343c0138be"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "176b26a5-111e-42cb-b922-ab343c0138be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "4269f46c-65e0-443c-bb1e-9d5f2ccad513"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4269f46c-65e0-443c-bb1e-9d5f2ccad513",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "fd1775ec-2511-457c-8cfe-d1f598d6f560"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fd1775ec-2511-457c-8cfe-d1f598d6f560",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "1ebc0894-1340-4cbf-86bd-ed96910e442a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1ebc0894-1340-4cbf-86bd-ed96910e442a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": 
false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "146232ea-02a0-4d04-b38f-ea8c477d5cfa"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "146232ea-02a0-4d04-b38f-ea8c477d5cfa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "7b445d27-eb2e-4062-992d-7c23de4c0cd7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "7b445d27-eb2e-4062-992d-7c23de4c0cd7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:12:27.590 15:54:38 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:12:27.590 15:54:38 -- bdev/blockdev.sh@750 -- # hello_world_bdev=nvme0n1 00:12:27.590 15:54:38 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:12:27.590 15:54:38 -- bdev/blockdev.sh@752 -- # killprocess 67246 00:12:27.591 15:54:38 -- common/autotest_common.sh@936 -- # '[' -z 67246 ']' 00:12:27.591 15:54:38 -- common/autotest_common.sh@940 -- # kill -0 67246 00:12:27.591 15:54:38 -- common/autotest_common.sh@941 -- # uname 00:12:27.591 15:54:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:27.591 15:54:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67246 00:12:27.591 killing process with pid 67246 00:12:27.591 15:54:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:27.591 15:54:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:27.591 15:54:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67246' 00:12:27.591 15:54:38 -- common/autotest_common.sh@955 -- # kill 67246 00:12:27.591 15:54:38 -- common/autotest_common.sh@960 -- # wait 67246 00:12:28.965 15:54:40 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:28.965 15:54:40 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:28.965 15:54:40 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:28.965 15:54:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:28.965 15:54:40 -- common/autotest_common.sh@10 -- # set +x 00:12:28.965 ************************************ 00:12:28.965 START TEST bdev_hello_world 00:12:28.965 ************************************ 00:12:28.966 15:54:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:28.966 [2024-11-29 15:54:40.190809] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
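(The large JSON block above is blockdev.sh collecting the six freshly created xnvme bdevs over RPC and keeping only the unclaimed ones. Condensed from the bdev_get_bdevs/jq traces, the equivalent standalone query would be something like:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.claimed == false) | .name'

which yields the nvme0n1 through nvme3n1 names echoed earlier.)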
00:12:28.966 [2024-11-29 15:54:40.191060] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67626 ] 00:12:28.966 [2024-11-29 15:54:40.339958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.224 [2024-11-29 15:54:40.478289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.483 [2024-11-29 15:54:40.758635] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:29.483 [2024-11-29 15:54:40.758818] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:29.483 [2024-11-29 15:54:40.758878] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:29.483 [2024-11-29 15:54:40.760305] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:29.483 [2024-11-29 15:54:40.760640] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:29.483 [2024-11-29 15:54:40.760708] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:29.483 [2024-11-29 15:54:40.760916] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:12:29.483 00:12:29.483 [2024-11-29 15:54:40.760938] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:30.049 00:12:30.049 ************************************ 00:12:30.049 END TEST bdev_hello_world 00:12:30.049 ************************************ 00:12:30.049 real 0m1.236s 00:12:30.049 user 0m0.960s 00:12:30.049 sys 0m0.164s 00:12:30.049 15:54:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:30.049 15:54:41 -- common/autotest_common.sh@10 -- # set +x 00:12:30.049 15:54:41 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:12:30.049 15:54:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:30.049 15:54:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:30.049 15:54:41 -- common/autotest_common.sh@10 -- # set +x 00:12:30.049 ************************************ 00:12:30.049 START TEST bdev_bounds 00:12:30.049 ************************************ 00:12:30.049 15:54:41 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:12:30.049 Process bdevio pid: 67657 00:12:30.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.049 15:54:41 -- bdev/blockdev.sh@288 -- # bdevio_pid=67657 00:12:30.049 15:54:41 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:30.049 15:54:41 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 67657' 00:12:30.049 15:54:41 -- bdev/blockdev.sh@291 -- # waitforlisten 67657 00:12:30.049 15:54:41 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:30.049 15:54:41 -- common/autotest_common.sh@829 -- # '[' -z 67657 ']' 00:12:30.049 15:54:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.049 15:54:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:30.049 15:54:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
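(The bounds stage is driven by the bdevio test client: blockdev.sh starts bdevio in wait mode against the generated bdev.json, then kicks the CUnit suites over the RPC socket with tests.py, as the trace continues below. A standalone sketch:

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    # the harness waits for /var/tmp/spdk.sock to come up before this step
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests

One suite of blockdev read/write/reset/passthru cases then runs per bdev; the six result blocks follow.)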
00:12:30.049 15:54:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:30.049 15:54:41 -- common/autotest_common.sh@10 -- # set +x 00:12:30.308 [2024-11-29 15:54:41.488175] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:30.308 [2024-11-29 15:54:41.488288] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67657 ] 00:12:30.308 [2024-11-29 15:54:41.634445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:30.566 [2024-11-29 15:54:41.773441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.566 [2024-11-29 15:54:41.773708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.566 [2024-11-29 15:54:41.773776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.133 15:54:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:31.133 15:54:42 -- common/autotest_common.sh@862 -- # return 0 00:12:31.133 15:54:42 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:31.133 I/O targets: 00:12:31.133 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:31.133 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:31.134 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:31.134 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:31.134 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:31.134 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:31.134 00:12:31.134 00:12:31.134 CUnit - A unit testing framework for C - Version 2.1-3 00:12:31.134 http://cunit.sourceforge.net/ 00:12:31.134 00:12:31.134 00:12:31.134 Suite: bdevio tests on: nvme3n1 00:12:31.134 Test: blockdev write read block ...passed 00:12:31.134 Test: blockdev write zeroes read block ...passed 00:12:31.134 Test: blockdev write zeroes read no split ...passed 00:12:31.134 Test: blockdev write zeroes read split ...passed 00:12:31.134 Test: blockdev write zeroes read split partial ...passed 00:12:31.134 Test: blockdev reset ...passed 00:12:31.134 Test: blockdev write read 8 blocks ...passed 00:12:31.134 Test: blockdev write read size > 128k ...passed 00:12:31.134 Test: blockdev write read invalid size ...passed 00:12:31.134 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:31.134 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:31.134 Test: blockdev write read max offset ...passed 00:12:31.134 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:31.134 Test: blockdev writev readv 8 blocks ...passed 00:12:31.134 Test: blockdev writev readv 30 x 1block ...passed 00:12:31.134 Test: blockdev writev readv block ...passed 00:12:31.134 Test: blockdev writev readv size > 128k ...passed 00:12:31.134 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:31.134 Test: blockdev comparev and writev ...passed 00:12:31.134 Test: blockdev nvme passthru rw ...passed 00:12:31.134 Test: blockdev nvme passthru vendor specific ...passed 00:12:31.134 Test: blockdev nvme admin passthru ...passed 00:12:31.134 Test: blockdev copy ...passed 00:12:31.134 Suite: bdevio tests on: nvme2n1 00:12:31.134 Test: blockdev write read block ...passed 00:12:31.134 Test: blockdev write zeroes read block ...passed 00:12:31.134 Test: blockdev write zeroes read no split ...passed 00:12:31.134 Test: blockdev 
write zeroes read split ...passed 00:12:31.134 Test: blockdev write zeroes read split partial ...passed 00:12:31.134 Test: blockdev reset ...passed 00:12:31.134 Test: blockdev write read 8 blocks ...passed 00:12:31.134 Test: blockdev write read size > 128k ...passed 00:12:31.134 Test: blockdev write read invalid size ...passed 00:12:31.134 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:31.134 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:31.134 Test: blockdev write read max offset ...passed 00:12:31.134 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:31.134 Test: blockdev writev readv 8 blocks ...passed 00:12:31.134 Test: blockdev writev readv 30 x 1block ...passed 00:12:31.134 Test: blockdev writev readv block ...passed 00:12:31.134 Test: blockdev writev readv size > 128k ...passed 00:12:31.134 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:31.134 Test: blockdev comparev and writev ...passed 00:12:31.134 Test: blockdev nvme passthru rw ...passed 00:12:31.134 Test: blockdev nvme passthru vendor specific ...passed 00:12:31.134 Test: blockdev nvme admin passthru ...passed 00:12:31.134 Test: blockdev copy ...passed 00:12:31.134 Suite: bdevio tests on: nvme1n3 00:12:31.134 Test: blockdev write read block ...passed 00:12:31.134 Test: blockdev write zeroes read block ...passed 00:12:31.134 Test: blockdev write zeroes read no split ...passed 00:12:31.134 Test: blockdev write zeroes read split ...passed 00:12:31.134 Test: blockdev write zeroes read split partial ...passed 00:12:31.134 Test: blockdev reset ...passed 00:12:31.134 Test: blockdev write read 8 blocks ...passed 00:12:31.134 Test: blockdev write read size > 128k ...passed 00:12:31.134 Test: blockdev write read invalid size ...passed 00:12:31.134 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:31.134 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:31.134 Test: blockdev write read max offset ...passed 00:12:31.134 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:31.134 Test: blockdev writev readv 8 blocks ...passed 00:12:31.134 Test: blockdev writev readv 30 x 1block ...passed 00:12:31.134 Test: blockdev writev readv block ...passed 00:12:31.134 Test: blockdev writev readv size > 128k ...passed 00:12:31.134 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:31.134 Test: blockdev comparev and writev ...passed 00:12:31.134 Test: blockdev nvme passthru rw ...passed 00:12:31.134 Test: blockdev nvme passthru vendor specific ...passed 00:12:31.134 Test: blockdev nvme admin passthru ...passed 00:12:31.134 Test: blockdev copy ...passed 00:12:31.134 Suite: bdevio tests on: nvme1n2 00:12:31.134 Test: blockdev write read block ...passed 00:12:31.134 Test: blockdev write zeroes read block ...passed 00:12:31.134 Test: blockdev write zeroes read no split ...passed 00:12:31.134 Test: blockdev write zeroes read split ...passed 00:12:31.393 Test: blockdev write zeroes read split partial ...passed 00:12:31.393 Test: blockdev reset ...passed 00:12:31.393 Test: blockdev write read 8 blocks ...passed 00:12:31.393 Test: blockdev write read size > 128k ...passed 00:12:31.393 Test: blockdev write read invalid size ...passed 00:12:31.393 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:31.393 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:31.393 Test: blockdev write read max offset 
...passed 00:12:31.393 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:31.393 Test: blockdev writev readv 8 blocks ...passed 00:12:31.393 Test: blockdev writev readv 30 x 1block ...passed 00:12:31.393 Test: blockdev writev readv block ...passed 00:12:31.393 Test: blockdev writev readv size > 128k ...passed 00:12:31.393 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:31.393 Test: blockdev comparev and writev ...passed 00:12:31.393 Test: blockdev nvme passthru rw ...passed 00:12:31.393 Test: blockdev nvme passthru vendor specific ...passed 00:12:31.393 Test: blockdev nvme admin passthru ...passed 00:12:31.393 Test: blockdev copy ...passed 00:12:31.393 Suite: bdevio tests on: nvme1n1 00:12:31.393 Test: blockdev write read block ...passed 00:12:31.393 Test: blockdev write zeroes read block ...passed 00:12:31.393 Test: blockdev write zeroes read no split ...passed 00:12:31.393 Test: blockdev write zeroes read split ...passed 00:12:31.393 Test: blockdev write zeroes read split partial ...passed 00:12:31.393 Test: blockdev reset ...passed 00:12:31.393 Test: blockdev write read 8 blocks ...passed 00:12:31.393 Test: blockdev write read size > 128k ...passed 00:12:31.393 Test: blockdev write read invalid size ...passed 00:12:31.393 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:31.393 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:31.393 Test: blockdev write read max offset ...passed 00:12:31.393 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:31.393 Test: blockdev writev readv 8 blocks ...passed 00:12:31.393 Test: blockdev writev readv 30 x 1block ...passed 00:12:31.393 Test: blockdev writev readv block ...passed 00:12:31.393 Test: blockdev writev readv size > 128k ...passed 00:12:31.393 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:31.393 Test: blockdev comparev and writev ...passed 00:12:31.393 Test: blockdev nvme passthru rw ...passed 00:12:31.393 Test: blockdev nvme passthru vendor specific ...passed 00:12:31.393 Test: blockdev nvme admin passthru ...passed 00:12:31.393 Test: blockdev copy ...passed 00:12:31.393 Suite: bdevio tests on: nvme0n1 00:12:31.393 Test: blockdev write read block ...passed 00:12:31.393 Test: blockdev write zeroes read block ...passed 00:12:31.393 Test: blockdev write zeroes read no split ...passed 00:12:31.393 Test: blockdev write zeroes read split ...passed 00:12:31.393 Test: blockdev write zeroes read split partial ...passed 00:12:31.393 Test: blockdev reset ...passed 00:12:31.393 Test: blockdev write read 8 blocks ...passed 00:12:31.393 Test: blockdev write read size > 128k ...passed 00:12:31.393 Test: blockdev write read invalid size ...passed 00:12:31.393 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:31.393 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:31.393 Test: blockdev write read max offset ...passed 00:12:31.393 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:31.393 Test: blockdev writev readv 8 blocks ...passed 00:12:31.393 Test: blockdev writev readv 30 x 1block ...passed 00:12:31.393 Test: blockdev writev readv block ...passed 00:12:31.393 Test: blockdev writev readv size > 128k ...passed 00:12:31.393 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:31.393 Test: blockdev comparev and writev ...passed 00:12:31.393 Test: blockdev nvme passthru rw ...passed 00:12:31.393 Test: 
blockdev nvme passthru vendor specific ...passed 00:12:31.393 Test: blockdev nvme admin passthru ...passed 00:12:31.393 Test: blockdev copy ...passed 00:12:31.393 00:12:31.393 Run Summary: Type Total Ran Passed Failed Inactive 00:12:31.393 suites 6 6 n/a 0 0 00:12:31.393 tests 138 138 138 0 0 00:12:31.393 asserts 780 780 780 0 n/a 00:12:31.393 00:12:31.393 Elapsed time = 0.915 seconds 00:12:31.393 0 00:12:31.393 15:54:42 -- bdev/blockdev.sh@293 -- # killprocess 67657 00:12:31.393 15:54:42 -- common/autotest_common.sh@936 -- # '[' -z 67657 ']' 00:12:31.393 15:54:42 -- common/autotest_common.sh@940 -- # kill -0 67657 00:12:31.393 15:54:42 -- common/autotest_common.sh@941 -- # uname 00:12:31.393 15:54:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:31.393 15:54:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67657 00:12:31.393 killing process with pid 67657 00:12:31.393 15:54:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:31.393 15:54:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:31.393 15:54:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67657' 00:12:31.393 15:54:42 -- common/autotest_common.sh@955 -- # kill 67657 00:12:31.393 15:54:42 -- common/autotest_common.sh@960 -- # wait 67657 00:12:31.960 15:54:43 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:12:31.960 00:12:31.960 real 0m1.907s 00:12:31.960 user 0m4.528s 00:12:31.960 sys 0m0.262s 00:12:31.960 15:54:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:31.960 ************************************ 00:12:31.960 END TEST bdev_bounds 00:12:31.960 ************************************ 00:12:31.960 15:54:43 -- common/autotest_common.sh@10 -- # set +x 00:12:31.960 15:54:43 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:12:31.960 15:54:43 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:12:31.960 15:54:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.960 15:54:43 -- common/autotest_common.sh@10 -- # set +x 00:12:32.219 ************************************ 00:12:32.219 START TEST bdev_nbd 00:12:32.219 ************************************ 00:12:32.219 15:54:43 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:12:32.219 15:54:43 -- bdev/blockdev.sh@298 -- # uname -s 00:12:32.219 15:54:43 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:12:32.219 15:54:43 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:32.219 15:54:43 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:32.219 15:54:43 -- bdev/blockdev.sh@302 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:32.219 15:54:43 -- bdev/blockdev.sh@302 -- # local bdev_all 00:12:32.219 15:54:43 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:12:32.219 15:54:43 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:12:32.219 15:54:43 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:32.219 15:54:43 -- bdev/blockdev.sh@309 -- # local nbd_all 00:12:32.219 15:54:43 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:12:32.219 
15:54:43 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:32.219 15:54:43 -- bdev/blockdev.sh@312 -- # local nbd_list 00:12:32.219 15:54:43 -- bdev/blockdev.sh@313 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:32.219 15:54:43 -- bdev/blockdev.sh@313 -- # local bdev_list 00:12:32.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:32.219 15:54:43 -- bdev/blockdev.sh@316 -- # nbd_pid=67712 00:12:32.219 15:54:43 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:32.219 15:54:43 -- bdev/blockdev.sh@318 -- # waitforlisten 67712 /var/tmp/spdk-nbd.sock 00:12:32.219 15:54:43 -- common/autotest_common.sh@829 -- # '[' -z 67712 ']' 00:12:32.219 15:54:43 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:32.219 15:54:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:32.219 15:54:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:32.219 15:54:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:32.219 15:54:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:32.219 15:54:43 -- common/autotest_common.sh@10 -- # set +x 00:12:32.219 [2024-11-29 15:54:43.469440] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:32.219 [2024-11-29 15:54:43.469752] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.219 [2024-11-29 15:54:43.624594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.477 [2024-11-29 15:54:43.775938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.043 15:54:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:33.043 15:54:44 -- common/autotest_common.sh@862 -- # return 0 00:12:33.043 15:54:44 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:12:33.043 15:54:44 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:33.043 15:54:44 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:33.043 15:54:44 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:33.043 15:54:44 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:12:33.043 15:54:44 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:33.043 15:54:44 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:33.043 15:54:44 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:33.043 15:54:44 -- bdev/nbd_common.sh@24 -- # local i 00:12:33.043 15:54:44 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:33.043 15:54:44 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:33.043 15:54:44 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:33.043 15:54:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:33.300 15:54:44 -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:33.301 15:54:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:33.301 15:54:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:33.301 15:54:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:33.301 15:54:44 -- common/autotest_common.sh@867 -- # local i 00:12:33.301 15:54:44 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:33.301 15:54:44 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:33.301 15:54:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:33.301 15:54:44 -- common/autotest_common.sh@871 -- # break 00:12:33.301 15:54:44 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:33.301 15:54:44 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:33.301 15:54:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.301 1+0 records in 00:12:33.301 1+0 records out 00:12:33.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490287 s, 8.4 MB/s 00:12:33.301 15:54:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.301 15:54:44 -- common/autotest_common.sh@884 -- # size=4096 00:12:33.301 15:54:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.301 15:54:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:33.301 15:54:44 -- common/autotest_common.sh@887 -- # return 0 00:12:33.301 15:54:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:33.301 15:54:44 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:33.301 15:54:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:33.301 15:54:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:33.301 15:54:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:33.301 15:54:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:33.301 15:54:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:33.301 15:54:44 -- common/autotest_common.sh@867 -- # local i 00:12:33.301 15:54:44 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:33.301 15:54:44 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:33.301 15:54:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:33.301 15:54:44 -- common/autotest_common.sh@871 -- # break 00:12:33.301 15:54:44 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:33.301 15:54:44 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:33.301 15:54:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.301 1+0 records in 00:12:33.301 1+0 records out 00:12:33.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456902 s, 9.0 MB/s 00:12:33.301 15:54:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.301 15:54:44 -- common/autotest_common.sh@884 -- # size=4096 00:12:33.301 15:54:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.301 15:54:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:33.301 15:54:44 -- common/autotest_common.sh@887 -- # return 0 00:12:33.301 15:54:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:33.301 15:54:44 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:33.301 15:54:44 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:12:33.559 15:54:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:33.559 15:54:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:33.559 15:54:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:33.559 15:54:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:12:33.559 15:54:44 -- common/autotest_common.sh@867 -- # local i 00:12:33.559 15:54:44 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:33.559 15:54:44 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:33.559 15:54:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:12:33.559 15:54:44 -- common/autotest_common.sh@871 -- # break 00:12:33.559 15:54:44 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:33.559 15:54:44 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:33.559 15:54:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.559 1+0 records in 00:12:33.559 1+0 records out 00:12:33.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391357 s, 10.5 MB/s 00:12:33.559 15:54:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.559 15:54:44 -- common/autotest_common.sh@884 -- # size=4096 00:12:33.559 15:54:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.559 15:54:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:33.559 15:54:44 -- common/autotest_common.sh@887 -- # return 0 00:12:33.559 15:54:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:33.559 15:54:44 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:33.559 15:54:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:12:33.816 15:54:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:33.816 15:54:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:33.816 15:54:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:33.816 15:54:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:12:33.816 15:54:45 -- common/autotest_common.sh@867 -- # local i 00:12:33.816 15:54:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:33.816 15:54:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:33.816 15:54:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:12:33.816 15:54:45 -- common/autotest_common.sh@871 -- # break 00:12:33.816 15:54:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:33.816 15:54:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:33.816 15:54:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.816 1+0 records in 00:12:33.816 1+0 records out 00:12:33.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301466 s, 13.6 MB/s 00:12:33.816 15:54:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.816 15:54:45 -- common/autotest_common.sh@884 -- # size=4096 00:12:33.816 15:54:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.816 15:54:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:33.816 15:54:45 -- common/autotest_common.sh@887 -- # return 0 00:12:33.816 15:54:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:33.816 15:54:45 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:33.816 15:54:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:34.074 15:54:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:34.074 15:54:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:34.074 15:54:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:34.074 15:54:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:12:34.074 15:54:45 -- common/autotest_common.sh@867 -- # local i 00:12:34.074 15:54:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:34.074 15:54:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:34.074 15:54:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:12:34.074 15:54:45 -- common/autotest_common.sh@871 -- # break 00:12:34.074 15:54:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:34.074 15:54:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:34.074 15:54:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:34.074 1+0 records in 00:12:34.074 1+0 records out 00:12:34.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410805 s, 10.0 MB/s 00:12:34.074 15:54:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.074 15:54:45 -- common/autotest_common.sh@884 -- # size=4096 00:12:34.074 15:54:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.074 15:54:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:34.074 15:54:45 -- common/autotest_common.sh@887 -- # return 0 00:12:34.074 15:54:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:34.074 15:54:45 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:34.074 15:54:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:34.335 15:54:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:34.335 15:54:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:34.335 15:54:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:34.335 15:54:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:12:34.335 15:54:45 -- common/autotest_common.sh@867 -- # local i 00:12:34.335 15:54:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:34.335 15:54:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:34.335 15:54:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:12:34.335 15:54:45 -- common/autotest_common.sh@871 -- # break 00:12:34.335 15:54:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:34.335 15:54:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:34.335 15:54:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:34.335 1+0 records in 00:12:34.335 1+0 records out 00:12:34.335 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011907 s, 3.4 MB/s 00:12:34.335 15:54:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.335 15:54:45 -- common/autotest_common.sh@884 -- # size=4096 00:12:34.335 15:54:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.335 15:54:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:34.335 15:54:45 -- common/autotest_common.sh@887 -- # return 0 
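
The readiness check traced six times above follows one fixed pattern: a bounded poll of /proc/partitions for the new node, then a bounded retry of a single 4 KiB O_DIRECT read whose byte count must be non-zero. A minimal bash sketch of that pattern follows; it is reconstructed from the trace rather than copied from common/autotest_common.sh, the scratch path is shortened, and the inter-retry sleep is an assumption (every attempt in this run succeeds on the first pass).

waitfornbd_sketch() {
    local nbd_name=$1 i
    # phase 1: wait (at most 20 tries) for the kernel to publish the device
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                        # assumed delay; the trace never reaches a retry
    done
    # phase 2: prove the device is readable with one 4 KiB direct read
    local size
    for (( i = 1; i <= 20; i++ )); do
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || continue
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ] && return 0     # 4096 bytes read back: the device is live
    done
    return 1
}
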
00:12:34.335 15:54:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:34.335 15:54:45 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:34.335 15:54:45 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:34.335 15:54:45 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:34.335 { 00:12:34.335 "nbd_device": "/dev/nbd0", 00:12:34.335 "bdev_name": "nvme0n1" 00:12:34.335 }, 00:12:34.335 { 00:12:34.335 "nbd_device": "/dev/nbd1", 00:12:34.335 "bdev_name": "nvme1n1" 00:12:34.335 }, 00:12:34.335 { 00:12:34.335 "nbd_device": "/dev/nbd2", 00:12:34.335 "bdev_name": "nvme1n2" 00:12:34.335 }, 00:12:34.335 { 00:12:34.335 "nbd_device": "/dev/nbd3", 00:12:34.335 "bdev_name": "nvme1n3" 00:12:34.335 }, 00:12:34.335 { 00:12:34.335 "nbd_device": "/dev/nbd4", 00:12:34.335 "bdev_name": "nvme2n1" 00:12:34.335 }, 00:12:34.335 { 00:12:34.335 "nbd_device": "/dev/nbd5", 00:12:34.335 "bdev_name": "nvme3n1" 00:12:34.335 } 00:12:34.335 ]' 00:12:34.335 15:54:45 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:34.597 15:54:45 -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:34.597 { 00:12:34.597 "nbd_device": "/dev/nbd0", 00:12:34.597 "bdev_name": "nvme0n1" 00:12:34.597 }, 00:12:34.597 { 00:12:34.597 "nbd_device": "/dev/nbd1", 00:12:34.597 "bdev_name": "nvme1n1" 00:12:34.597 }, 00:12:34.597 { 00:12:34.597 "nbd_device": "/dev/nbd2", 00:12:34.597 "bdev_name": "nvme1n2" 00:12:34.597 }, 00:12:34.597 { 00:12:34.597 "nbd_device": "/dev/nbd3", 00:12:34.597 "bdev_name": "nvme1n3" 00:12:34.597 }, 00:12:34.597 { 00:12:34.597 "nbd_device": "/dev/nbd4", 00:12:34.597 "bdev_name": "nvme2n1" 00:12:34.597 }, 00:12:34.597 { 00:12:34.597 "nbd_device": "/dev/nbd5", 00:12:34.597 "bdev_name": "nvme3n1" 00:12:34.597 } 00:12:34.597 ]' 00:12:34.597 15:54:45 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:34.597 15:54:45 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:34.597 15:54:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:34.597 15:54:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:34.597 15:54:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:34.597 15:54:45 -- bdev/nbd_common.sh@51 -- # local i 00:12:34.597 15:54:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.597 15:54:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:34.597 15:54:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:34.597 15:54:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:34.597 15:54:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:34.597 15:54:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.597 15:54:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.597 15:54:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:34.597 15:54:46 -- bdev/nbd_common.sh@41 -- # break 00:12:34.597 15:54:46 -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.597 15:54:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.597 15:54:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:34.856 15:54:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:34.856 15:54:46 -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:12:34.856 15:54:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:34.856 15:54:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.856 15:54:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.857 15:54:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:34.857 15:54:46 -- bdev/nbd_common.sh@41 -- # break 00:12:34.857 15:54:46 -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.857 15:54:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.857 15:54:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:35.115 15:54:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:35.115 15:54:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:35.115 15:54:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:35.115 15:54:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.115 15:54:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.115 15:54:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:35.115 15:54:46 -- bdev/nbd_common.sh@41 -- # break 00:12:35.115 15:54:46 -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.115 15:54:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.115 15:54:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:35.373 15:54:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:35.373 15:54:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:35.373 15:54:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:35.373 15:54:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.373 15:54:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.373 15:54:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:35.373 15:54:46 -- bdev/nbd_common.sh@41 -- # break 00:12:35.373 15:54:46 -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.373 15:54:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.373 15:54:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:35.632 15:54:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:35.632 15:54:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:35.632 15:54:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:35.632 15:54:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.632 15:54:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.632 15:54:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:35.632 15:54:46 -- bdev/nbd_common.sh@41 -- # break 00:12:35.632 15:54:46 -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.632 15:54:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.632 15:54:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:35.632 15:54:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:35.632 15:54:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:35.632 15:54:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:35.632 15:54:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.632 15:54:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.632 15:54:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:35.632 15:54:47 -- bdev/nbd_common.sh@41 -- # break 00:12:35.632 15:54:47 -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.632 15:54:47 -- 
bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:35.632 15:54:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:35.632 15:54:47 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@65 -- # true 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@65 -- # count=0 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@122 -- # count=0 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@127 -- # return 0 00:12:35.891 15:54:47 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@12 -- # local i 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:35.891 15:54:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:36.150 /dev/nbd0 00:12:36.150 15:54:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:36.150 15:54:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:36.150 15:54:47 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:36.150 15:54:47 -- common/autotest_common.sh@867 -- # local i 00:12:36.150 15:54:47 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:36.150 15:54:47 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:36.150 15:54:47 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:36.150 15:54:47 -- common/autotest_common.sh@871 -- # break 00:12:36.150 15:54:47 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:36.150 15:54:47 -- common/autotest_common.sh@882 -- 
# (( i <= 20 )) 00:12:36.150 15:54:47 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:36.150 1+0 records in 00:12:36.150 1+0 records out 00:12:36.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440466 s, 9.3 MB/s 00:12:36.150 15:54:47 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.151 15:54:47 -- common/autotest_common.sh@884 -- # size=4096 00:12:36.151 15:54:47 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.151 15:54:47 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:36.151 15:54:47 -- common/autotest_common.sh@887 -- # return 0 00:12:36.151 15:54:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:36.151 15:54:47 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:36.151 15:54:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:36.409 /dev/nbd1 00:12:36.409 15:54:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:36.409 15:54:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:36.409 15:54:47 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:36.409 15:54:47 -- common/autotest_common.sh@867 -- # local i 00:12:36.409 15:54:47 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:36.409 15:54:47 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:36.409 15:54:47 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:36.409 15:54:47 -- common/autotest_common.sh@871 -- # break 00:12:36.409 15:54:47 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:36.409 15:54:47 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:36.409 15:54:47 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:36.410 1+0 records in 00:12:36.410 1+0 records out 00:12:36.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448002 s, 9.1 MB/s 00:12:36.410 15:54:47 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.410 15:54:47 -- common/autotest_common.sh@884 -- # size=4096 00:12:36.410 15:54:47 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.410 15:54:47 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:36.410 15:54:47 -- common/autotest_common.sh@887 -- # return 0 00:12:36.410 15:54:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:36.410 15:54:47 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:36.410 15:54:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:12:36.669 /dev/nbd10 00:12:36.669 15:54:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:36.669 15:54:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:36.669 15:54:47 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:12:36.669 15:54:47 -- common/autotest_common.sh@867 -- # local i 00:12:36.669 15:54:47 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:36.669 15:54:47 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:36.669 15:54:47 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:12:36.669 15:54:47 -- common/autotest_common.sh@871 -- # break 00:12:36.669 15:54:47 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:36.669 15:54:47 -- 
common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:36.669 15:54:47 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:36.669 1+0 records in 00:12:36.669 1+0 records out 00:12:36.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676259 s, 6.1 MB/s 00:12:36.669 15:54:47 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.669 15:54:47 -- common/autotest_common.sh@884 -- # size=4096 00:12:36.669 15:54:47 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.669 15:54:47 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:36.669 15:54:47 -- common/autotest_common.sh@887 -- # return 0 00:12:36.669 15:54:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:36.669 15:54:47 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:36.669 15:54:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:12:36.669 /dev/nbd11 00:12:36.929 15:54:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:36.929 15:54:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:36.929 15:54:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:12:36.929 15:54:48 -- common/autotest_common.sh@867 -- # local i 00:12:36.929 15:54:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:36.929 15:54:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:36.929 15:54:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:12:36.929 15:54:48 -- common/autotest_common.sh@871 -- # break 00:12:36.929 15:54:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:36.929 15:54:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:36.929 15:54:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:36.929 1+0 records in 00:12:36.929 1+0 records out 00:12:36.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000694148 s, 5.9 MB/s 00:12:36.929 15:54:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.929 15:54:48 -- common/autotest_common.sh@884 -- # size=4096 00:12:36.929 15:54:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.929 15:54:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:36.929 15:54:48 -- common/autotest_common.sh@887 -- # return 0 00:12:36.929 15:54:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:36.929 15:54:48 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:36.929 15:54:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:12:36.929 /dev/nbd12 00:12:36.929 15:54:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:36.929 15:54:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:36.929 15:54:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:12:36.929 15:54:48 -- common/autotest_common.sh@867 -- # local i 00:12:36.929 15:54:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:36.929 15:54:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:36.929 15:54:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:12:36.929 15:54:48 -- common/autotest_common.sh@871 -- # break 00:12:36.929 15:54:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 
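
Note the shape of the calls in this second pass: nbd_start_disk now takes an explicit /dev/nbdN argument, so the test, not the target, chooses the node (the first pass omitted it and read the assignment back from the RPC reply). The loop being traced reduces to the sketch below; RPC, SOCK, and the two arrays are taken verbatim from the trace, and waitfornbd_sketch is the helper sketched earlier.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock
bdev_list=(nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1)
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
for i in "${!bdev_list[@]}"; do
    # export bdev i on its pre-selected NBD node, then wait for it to appear
    "$RPC" -s "$SOCK" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
    waitfornbd_sketch "$(basename "${nbd_list[i]}")"
done
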
00:12:36.929 15:54:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:36.929 15:54:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:36.929 1+0 records in 00:12:36.929 1+0 records out 00:12:36.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308972 s, 13.3 MB/s 00:12:36.929 15:54:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.929 15:54:48 -- common/autotest_common.sh@884 -- # size=4096 00:12:36.929 15:54:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.929 15:54:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:36.929 15:54:48 -- common/autotest_common.sh@887 -- # return 0 00:12:36.929 15:54:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:36.929 15:54:48 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:36.929 15:54:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:37.188 /dev/nbd13 00:12:37.188 15:54:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:37.188 15:54:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:37.188 15:54:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:12:37.188 15:54:48 -- common/autotest_common.sh@867 -- # local i 00:12:37.188 15:54:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:37.189 15:54:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:37.189 15:54:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:12:37.189 15:54:48 -- common/autotest_common.sh@871 -- # break 00:12:37.189 15:54:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:37.189 15:54:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:37.189 15:54:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:37.189 1+0 records in 00:12:37.189 1+0 records out 00:12:37.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000842809 s, 4.9 MB/s 00:12:37.189 15:54:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.189 15:54:48 -- common/autotest_common.sh@884 -- # size=4096 00:12:37.189 15:54:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.189 15:54:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:37.189 15:54:48 -- common/autotest_common.sh@887 -- # return 0 00:12:37.189 15:54:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:37.189 15:54:48 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:37.189 15:54:48 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:37.189 15:54:48 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:37.189 15:54:48 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:37.449 15:54:48 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:37.449 { 00:12:37.449 "nbd_device": "/dev/nbd0", 00:12:37.449 "bdev_name": "nvme0n1" 00:12:37.449 }, 00:12:37.449 { 00:12:37.449 "nbd_device": "/dev/nbd1", 00:12:37.449 "bdev_name": "nvme1n1" 00:12:37.449 }, 00:12:37.450 { 00:12:37.450 "nbd_device": "/dev/nbd10", 00:12:37.450 "bdev_name": "nvme1n2" 00:12:37.450 }, 00:12:37.450 { 00:12:37.450 "nbd_device": "/dev/nbd11", 00:12:37.450 "bdev_name": "nvme1n3" 00:12:37.450 }, 00:12:37.450 { 
00:12:37.450 "nbd_device": "/dev/nbd12", 00:12:37.450 "bdev_name": "nvme2n1" 00:12:37.450 }, 00:12:37.450 { 00:12:37.450 "nbd_device": "/dev/nbd13", 00:12:37.450 "bdev_name": "nvme3n1" 00:12:37.450 } 00:12:37.450 ]' 00:12:37.450 15:54:48 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:37.450 { 00:12:37.450 "nbd_device": "/dev/nbd0", 00:12:37.450 "bdev_name": "nvme0n1" 00:12:37.450 }, 00:12:37.450 { 00:12:37.450 "nbd_device": "/dev/nbd1", 00:12:37.450 "bdev_name": "nvme1n1" 00:12:37.450 }, 00:12:37.450 { 00:12:37.450 "nbd_device": "/dev/nbd10", 00:12:37.450 "bdev_name": "nvme1n2" 00:12:37.450 }, 00:12:37.450 { 00:12:37.450 "nbd_device": "/dev/nbd11", 00:12:37.450 "bdev_name": "nvme1n3" 00:12:37.450 }, 00:12:37.450 { 00:12:37.450 "nbd_device": "/dev/nbd12", 00:12:37.450 "bdev_name": "nvme2n1" 00:12:37.450 }, 00:12:37.450 { 00:12:37.450 "nbd_device": "/dev/nbd13", 00:12:37.450 "bdev_name": "nvme3n1" 00:12:37.450 } 00:12:37.450 ]' 00:12:37.450 15:54:48 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:37.450 15:54:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:37.450 /dev/nbd1 00:12:37.450 /dev/nbd10 00:12:37.450 /dev/nbd11 00:12:37.450 /dev/nbd12 00:12:37.450 /dev/nbd13' 00:12:37.450 15:54:48 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:37.450 /dev/nbd1 00:12:37.450 /dev/nbd10 00:12:37.450 /dev/nbd11 00:12:37.450 /dev/nbd12 00:12:37.450 /dev/nbd13' 00:12:37.450 15:54:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:37.450 15:54:48 -- bdev/nbd_common.sh@65 -- # count=6 00:12:37.450 15:54:48 -- bdev/nbd_common.sh@66 -- # echo 6 00:12:37.450 15:54:48 -- bdev/nbd_common.sh@95 -- # count=6 00:12:37.450 15:54:48 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:37.450 15:54:48 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:37.450 15:54:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:37.450 15:54:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:37.450 15:54:48 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:37.450 15:54:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:37.450 15:54:48 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:37.450 15:54:48 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:37.450 256+0 records in 00:12:37.450 256+0 records out 00:12:37.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00796587 s, 132 MB/s 00:12:37.450 15:54:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:37.450 15:54:48 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:37.712 256+0 records in 00:12:37.712 256+0 records out 00:12:37.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.216138 s, 4.9 MB/s 00:12:37.712 15:54:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:37.712 15:54:48 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:37.974 256+0 records in 00:12:37.974 256+0 records out 00:12:37.974 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.239613 s, 4.4 MB/s 00:12:37.974 15:54:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:37.974 15:54:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 
count=256 oflag=direct 00:12:38.236 256+0 records in 00:12:38.236 256+0 records out 00:12:38.236 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.250171 s, 4.2 MB/s 00:12:38.236 15:54:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:38.236 15:54:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:38.496 256+0 records in 00:12:38.496 256+0 records out 00:12:38.496 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.21287 s, 4.9 MB/s 00:12:38.496 15:54:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:38.496 15:54:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:38.496 256+0 records in 00:12:38.496 256+0 records out 00:12:38.496 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161609 s, 6.5 MB/s 00:12:38.496 15:54:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:38.496 15:54:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:38.754 256+0 records in 00:12:38.754 256+0 records out 00:12:38.754 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.19249 s, 5.4 MB/s 00:12:38.754 15:54:50 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:38.754 15:54:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:38.754 15:54:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:38.754 15:54:50 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:38.754 15:54:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:38.754 15:54:50 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:38.754 15:54:50 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:38.754 15:54:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:38.754 15:54:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:38.754 15:54:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:38.754 15:54:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:38.754 15:54:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:38.754 15:54:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:38.754 15:54:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:38.754 15:54:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:38.754 15:54:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:38.754 15:54:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:38.754 15:54:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:38.754 15:54:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:38.754 15:54:50 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:38.754 15:54:50 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:38.754 15:54:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:38.754 
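
The write/verify round just completed fans a single 1 MiB random pattern out to all six devices and then byte-compares the leading 1 MiB of each device against the source file; cmp -b -n 1M is exactly the comparison the trace runs. Condensed into a sketch, reusing the arrays from above:

tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
dd if=/dev/urandom of="$tmp" bs=4096 count=256              # one shared 1 MiB reference pattern
for nbd in "${nbd_list[@]}"; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # push it through O_DIRECT
done
for nbd in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp" "$nbd"                              # non-zero exit on the first differing byte
done
rm "$tmp"
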
15:54:50 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:38.754 15:54:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:38.754 15:54:50 -- bdev/nbd_common.sh@51 -- # local i 00:12:38.754 15:54:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.754 15:54:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:39.013 15:54:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:39.013 15:54:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:39.013 15:54:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:39.013 15:54:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.013 15:54:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.013 15:54:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:39.013 15:54:50 -- bdev/nbd_common.sh@41 -- # break 00:12:39.013 15:54:50 -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.013 15:54:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.013 15:54:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:39.271 15:54:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:39.271 15:54:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:39.271 15:54:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:39.271 15:54:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.271 15:54:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.271 15:54:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:39.271 15:54:50 -- bdev/nbd_common.sh@41 -- # break 00:12:39.272 15:54:50 -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.272 15:54:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.272 15:54:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:39.272 15:54:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:39.272 15:54:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:39.272 15:54:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:39.272 15:54:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.272 15:54:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.272 15:54:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:39.530 15:54:50 -- bdev/nbd_common.sh@41 -- # break 00:12:39.530 15:54:50 -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.530 15:54:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.530 15:54:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:39.530 15:54:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:39.530 15:54:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:39.530 15:54:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:39.530 15:54:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.530 15:54:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.530 15:54:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:39.530 15:54:50 -- bdev/nbd_common.sh@41 -- # break 00:12:39.530 15:54:50 -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.530 15:54:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.530 15:54:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:39.789 15:54:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:39.789 15:54:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:39.789 15:54:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:39.789 15:54:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.789 15:54:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.789 15:54:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:39.789 15:54:51 -- bdev/nbd_common.sh@41 -- # break 00:12:39.789 15:54:51 -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.789 15:54:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.789 15:54:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:40.048 15:54:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:40.048 15:54:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:40.048 15:54:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:40.048 15:54:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:40.048 15:54:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:40.048 15:54:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:40.048 15:54:51 -- bdev/nbd_common.sh@41 -- # break 00:12:40.048 15:54:51 -- bdev/nbd_common.sh@45 -- # return 0 00:12:40.048 15:54:51 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:40.048 15:54:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:40.048 15:54:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:40.048 15:54:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:40.048 15:54:51 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:40.048 15:54:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:40.306 15:54:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:40.306 15:54:51 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:40.306 15:54:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:40.306 15:54:51 -- bdev/nbd_common.sh@65 -- # true 00:12:40.306 15:54:51 -- bdev/nbd_common.sh@65 -- # count=0 00:12:40.306 15:54:51 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:40.306 15:54:51 -- bdev/nbd_common.sh@104 -- # count=0 00:12:40.306 15:54:51 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:40.306 15:54:51 -- bdev/nbd_common.sh@109 -- # return 0 00:12:40.306 15:54:51 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:40.306 15:54:51 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:40.306 15:54:51 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:40.306 15:54:51 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:12:40.306 15:54:51 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:12:40.306 15:54:51 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:40.306 malloc_lvol_verify 00:12:40.306 15:54:51 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:40.565 23ec7f9c-3a44-4533-b58f-6e9cb9d478f2 00:12:40.565 15:54:51 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
bdev_lvol_create lvol 4 -l lvs 00:12:40.823 f577070a-c69b-4141-b4b9-52514b6ba57b 00:12:40.823 15:54:52 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:41.082 /dev/nbd0 00:12:41.082 15:54:52 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:41.082 mke2fs 1.47.0 (5-Feb-2023) 00:12:41.082 Discarding device blocks: 0/4096 done 00:12:41.082 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:41.082 00:12:41.082 Allocating group tables: 0/1 done 00:12:41.082 Writing inode tables: 0/1 done 00:12:41.082 Creating journal (1024 blocks): done 00:12:41.082 Writing superblocks and filesystem accounting information: 0/1 done 00:12:41.082 00:12:41.082 15:54:52 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:41.082 15:54:52 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:41.082 15:54:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:41.082 15:54:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:41.082 15:54:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:41.082 15:54:52 -- bdev/nbd_common.sh@51 -- # local i 00:12:41.082 15:54:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:41.082 15:54:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:41.082 15:54:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:41.082 15:54:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:41.082 15:54:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:41.082 15:54:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:41.082 15:54:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:41.082 15:54:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:41.082 15:54:52 -- bdev/nbd_common.sh@41 -- # break 00:12:41.082 15:54:52 -- bdev/nbd_common.sh@45 -- # return 0 00:12:41.082 15:54:52 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:41.082 15:54:52 -- bdev/nbd_common.sh@147 -- # return 0 00:12:41.082 15:54:52 -- bdev/blockdev.sh@324 -- # killprocess 67712 00:12:41.082 15:54:52 -- common/autotest_common.sh@936 -- # '[' -z 67712 ']' 00:12:41.082 15:54:52 -- common/autotest_common.sh@940 -- # kill -0 67712 00:12:41.082 15:54:52 -- common/autotest_common.sh@941 -- # uname 00:12:41.082 15:54:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:41.082 15:54:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67712 00:12:41.082 killing process with pid 67712 00:12:41.082 15:54:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:41.082 15:54:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:41.082 15:54:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67712' 00:12:41.082 15:54:52 -- common/autotest_common.sh@955 -- # kill 67712 00:12:41.082 15:54:52 -- common/autotest_common.sh@960 -- # wait 67712 00:12:42.023 15:54:53 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:12:42.023 00:12:42.023 real 0m9.771s 00:12:42.023 user 0m13.267s 00:12:42.023 sys 0m3.265s 00:12:42.023 15:54:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:42.023 15:54:53 -- common/autotest_common.sh@10 -- # set +x 00:12:42.023 ************************************ 00:12:42.023 END TEST bdev_nbd 00:12:42.023 ************************************ 00:12:42.023 15:54:53 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:12:42.023 15:54:53 -- 
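
nbd_with_lvol_verify, whose run ends above, is a stacked-bdev check: a logical volume carved from a malloc bdev is exported over NBD, and a clean mkfs.ext4 is the pass criterion. A condensed sketch with the RPC arguments from the trace; reading the size arguments as MiB (16 for the malloc bdev, 4 for the volume) is an interpretation, though the "4096 1k blocks" in the mkfs output above is consistent with it.

"$RPC" -s "$SOCK" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 (MiB, assumed) with 512 B blocks
"$RPC" -s "$SOCK" bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on the malloc bdev
"$RPC" -s "$SOCK" bdev_lvol_create lvol 4 -l lvs                    # volume 'lvol', size 4 (MiB, assumed)
"$RPC" -s "$SOCK" nbd_start_disk lvs/lvol /dev/nbd0                 # export the volume over NBD
mkfs.ext4 /dev/nbd0; mkfs_ret=$?                                    # pass criterion: mkfs exits 0
"$RPC" -s "$SOCK" nbd_stop_disk /dev/nbd0
(( mkfs_ret == 0 ))
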
bdev/blockdev.sh@762 -- # '[' xnvme = nvme ']' 00:12:42.023 15:54:53 -- bdev/blockdev.sh@762 -- # '[' xnvme = gpt ']' 00:12:42.023 15:54:53 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:12:42.023 15:54:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:42.023 15:54:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:42.023 15:54:53 -- common/autotest_common.sh@10 -- # set +x 00:12:42.023 ************************************ 00:12:42.023 START TEST bdev_fio 00:12:42.023 ************************************ 00:12:42.023 15:54:53 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:12:42.023 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:42.023 15:54:53 -- bdev/blockdev.sh@329 -- # local env_context 00:12:42.023 15:54:53 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:42.023 15:54:53 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:42.023 15:54:53 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:12:42.023 15:54:53 -- bdev/blockdev.sh@337 -- # echo '' 00:12:42.023 15:54:53 -- bdev/blockdev.sh@337 -- # env_context= 00:12:42.023 15:54:53 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:42.023 15:54:53 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:42.023 15:54:53 -- common/autotest_common.sh@1270 -- # local workload=verify 00:12:42.023 15:54:53 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:12:42.023 15:54:53 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:42.023 15:54:53 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:42.023 15:54:53 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:42.023 15:54:53 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:12:42.023 15:54:53 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:42.023 15:54:53 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:42.023 15:54:53 -- common/autotest_common.sh@1290 -- # cat 00:12:42.023 15:54:53 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:12:42.023 15:54:53 -- common/autotest_common.sh@1303 -- # cat 00:12:42.023 15:54:53 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:12:42.023 15:54:53 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:12:42.023 15:54:53 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:42.023 15:54:53 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:12:42.023 15:54:53 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:42.023 15:54:53 -- bdev/blockdev.sh@340 -- # echo '[job_nvme0n1]' 00:12:42.023 15:54:53 -- bdev/blockdev.sh@341 -- # echo filename=nvme0n1 00:12:42.023 15:54:53 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:42.023 15:54:53 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n1]' 00:12:42.023 15:54:53 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n1 00:12:42.023 15:54:53 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:42.023 15:54:53 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n2]' 00:12:42.023 15:54:53 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n2 00:12:42.023 15:54:53 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:42.023 15:54:53 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n3]' 
00:12:42.023 15:54:53 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n3 00:12:42.023 15:54:53 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:42.023 15:54:53 -- bdev/blockdev.sh@340 -- # echo '[job_nvme2n1]' 00:12:42.023 15:54:53 -- bdev/blockdev.sh@341 -- # echo filename=nvme2n1 00:12:42.023 15:54:53 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:42.023 15:54:53 -- bdev/blockdev.sh@340 -- # echo '[job_nvme3n1]' 00:12:42.023 15:54:53 -- bdev/blockdev.sh@341 -- # echo filename=nvme3n1 00:12:42.023 15:54:53 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:42.023 15:54:53 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:42.023 15:54:53 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:12:42.023 15:54:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:42.023 15:54:53 -- common/autotest_common.sh@10 -- # set +x 00:12:42.023 ************************************ 00:12:42.023 START TEST bdev_fio_rw_verify 00:12:42.023 ************************************ 00:12:42.023 15:54:53 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:42.023 15:54:53 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:42.023 15:54:53 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:12:42.023 15:54:53 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:42.023 15:54:53 -- common/autotest_common.sh@1328 -- # local sanitizers 00:12:42.023 15:54:53 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:42.023 15:54:53 -- common/autotest_common.sh@1330 -- # shift 00:12:42.023 15:54:53 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:12:42.023 15:54:53 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:12:42.023 15:54:53 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:42.023 15:54:53 -- common/autotest_common.sh@1334 -- # grep libasan 00:12:42.023 15:54:53 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:12:42.023 15:54:53 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:42.023 15:54:53 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:42.023 15:54:53 -- common/autotest_common.sh@1336 -- # break 00:12:42.023 15:54:53 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:42.023 15:54:53 -- common/autotest_common.sh@1341 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:42.283 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:42.284 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:42.284 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:42.284 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:42.284 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:42.284 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:42.284 fio-3.35 00:12:42.284 Starting 6 threads 00:12:54.526 00:12:54.526 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=68107: Fri Nov 29 15:55:04 2024 00:12:54.526 read: IOPS=14.1k, BW=55.1MiB/s (57.8MB/s)(551MiB/10005msec) 00:12:54.526 slat (usec): min=2, max=2631, avg= 6.87, stdev=17.53 00:12:54.526 clat (usec): min=91, max=524696, avg=1439.15, stdev=5071.40 00:12:54.526 lat (usec): min=96, max=524701, avg=1446.02, stdev=5071.51 00:12:54.526 clat percentiles (usec): 00:12:54.526 | 50.000th=[ 1270], 99.000th=[ 4015], 99.900th=[ 5604], 00:12:54.526 | 99.990th=[408945], 99.999th=[526386] 00:12:54.526 write: IOPS=14.3k, BW=55.9MiB/s (58.6MB/s)(559MiB/10005msec); 0 zone resets 00:12:54.526 slat (usec): min=12, max=3506, avg=42.24, stdev=144.23 00:12:54.526 clat (usec): min=89, max=8688, avg=1607.85, stdev=873.60 00:12:54.526 lat (usec): min=105, max=9737, avg=1650.08, stdev=886.23 00:12:54.526 clat percentiles (usec): 00:12:54.526 | 50.000th=[ 1483], 99.000th=[ 4293], 99.900th=[ 5866], 99.990th=[ 7701], 00:12:54.526 | 99.999th=[ 8717] 00:12:54.526 bw ( KiB/s): min=40685, max=97088, per=100.00%, avg=57810.38, stdev=2615.70, samples=113 00:12:54.526 iops : min=10169, max=24272, avg=14451.72, stdev=653.95, samples=113 00:12:54.526 lat (usec) : 100=0.01%, 250=2.54%, 500=7.62%, 750=9.11%, 1000=11.35% 00:12:54.526 lat (msec) : 2=45.70%, 4=22.35%, 10=1.30%, 50=0.01%, 500=0.01% 00:12:54.526 lat (msec) : 750=0.01% 00:12:54.526 cpu : usr=45.24%, sys=31.39%, ctx=5501, majf=0, minf=16155 00:12:54.526 IO depths : 1=11.2%, 2=23.6%, 4=51.3%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:54.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:54.526 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:54.526 issued rwts: total=141084,143107,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:54.526 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:54.526 00:12:54.526 Run status group 0 (all jobs): 00:12:54.526 READ: bw=55.1MiB/s (57.8MB/s), 55.1MiB/s-55.1MiB/s (57.8MB/s-57.8MB/s), io=551MiB (578MB), run=10005-10005msec 00:12:54.526 WRITE: bw=55.9MiB/s (58.6MB/s), 55.9MiB/s-55.9MiB/s (58.6MB/s-58.6MB/s), io=559MiB (586MB), run=10005-10005msec 00:12:54.526 ----------------------------------------------------- 00:12:54.526 Suppressions used: 00:12:54.526 count bytes template 00:12:54.526 6 48 /usr/src/fio/parse.c 00:12:54.526 1941 186336 /usr/src/fio/iolog.c 00:12:54.526 1 8 libtcmalloc_minimal.so 00:12:54.526 1 
904 libcrypto.so 00:12:54.526 ----------------------------------------------------- 00:12:54.526 00:12:54.526 00:12:54.526 real 0m11.980s 00:12:54.526 user 0m28.727s 00:12:54.526 sys 0m19.270s 00:12:54.526 15:55:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:54.526 15:55:05 -- common/autotest_common.sh@10 -- # set +x 00:12:54.527 ************************************ 00:12:54.527 END TEST bdev_fio_rw_verify 00:12:54.527 ************************************ 00:12:54.527 15:55:05 -- bdev/blockdev.sh@348 -- # rm -f 00:12:54.527 15:55:05 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:54.527 15:55:05 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:54.527 15:55:05 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:54.527 15:55:05 -- common/autotest_common.sh@1270 -- # local workload=trim 00:12:54.527 15:55:05 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:12:54.527 15:55:05 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:54.527 15:55:05 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:54.527 15:55:05 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:54.527 15:55:05 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:12:54.527 15:55:05 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:54.527 15:55:05 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:54.527 15:55:05 -- common/autotest_common.sh@1290 -- # cat 00:12:54.527 15:55:05 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:12:54.527 15:55:05 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:12:54.527 15:55:05 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:12:54.527 15:55:05 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "176b26a5-111e-42cb-b922-ab343c0138be"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "176b26a5-111e-42cb-b922-ab343c0138be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "4269f46c-65e0-443c-bb1e-9d5f2ccad513"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4269f46c-65e0-443c-bb1e-9d5f2ccad513",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "fd1775ec-2511-457c-8cfe-d1f598d6f560"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": 
"fd1775ec-2511-457c-8cfe-d1f598d6f560",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "1ebc0894-1340-4cbf-86bd-ed96910e442a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1ebc0894-1340-4cbf-86bd-ed96910e442a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "146232ea-02a0-4d04-b38f-ea8c477d5cfa"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "146232ea-02a0-4d04-b38f-ea8c477d5cfa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "7b445d27-eb2e-4062-992d-7c23de4c0cd7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "7b445d27-eb2e-4062-992d-7c23de4c0cd7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:12:54.527 15:55:05 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:54.527 15:55:05 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:12:54.527 15:55:05 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:54.527 /home/vagrant/spdk_repo/spdk 00:12:54.527 15:55:05 -- bdev/blockdev.sh@360 -- # popd 00:12:54.527 15:55:05 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:12:54.527 15:55:05 -- bdev/blockdev.sh@362 -- # return 0 00:12:54.527 00:12:54.527 real 0m12.155s 00:12:54.527 user 0m28.797s 00:12:54.527 sys 0m19.350s 00:12:54.527 15:55:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:54.527 ************************************ 00:12:54.527 END TEST bdev_fio 00:12:54.527 ************************************ 00:12:54.527 15:55:05 -- common/autotest_common.sh@10 -- # set +x 00:12:54.527 15:55:05 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM 
EXIT 00:12:54.527 15:55:05 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:54.527 15:55:05 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:12:54.527 15:55:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:54.527 15:55:05 -- common/autotest_common.sh@10 -- # set +x 00:12:54.527 ************************************ 00:12:54.527 START TEST bdev_verify 00:12:54.527 ************************************ 00:12:54.527 15:55:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:54.527 [2024-11-29 15:55:05.532816] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:54.527 [2024-11-29 15:55:05.532987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68285 ] 00:12:54.527 [2024-11-29 15:55:05.688323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:54.527 [2024-11-29 15:55:05.908769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.527 [2024-11-29 15:55:05.908885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.100 Running I/O for 5 seconds... 00:13:00.468 00:13:00.468 Latency(us) 00:13:00.468 [2024-11-29T15:55:11.899Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:00.468 [2024-11-29T15:55:11.899Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:00.468 Verification LBA range: start 0x0 length 0x20000 00:13:00.468 nvme0n1 : 5.10 2082.40 8.13 0.00 0.00 61086.03 6755.25 87112.47 00:13:00.468 [2024-11-29T15:55:11.899Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:00.468 Verification LBA range: start 0x20000 length 0x20000 00:13:00.468 nvme0n1 : 5.08 2273.83 8.88 0.00 0.00 55962.32 7158.55 68560.74 00:13:00.468 [2024-11-29T15:55:11.899Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:00.468 Verification LBA range: start 0x0 length 0x80000 00:13:00.468 nvme1n1 : 5.11 1845.96 7.21 0.00 0.00 68939.86 7662.67 86305.87 00:13:00.468 [2024-11-29T15:55:11.899Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:00.468 Verification LBA range: start 0x80000 length 0x80000 00:13:00.468 nvme1n1 : 5.07 2129.71 8.32 0.00 0.00 59736.18 16736.89 83886.08 00:13:00.468 [2024-11-29T15:55:11.899Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:00.468 Verification LBA range: start 0x0 length 0x80000 00:13:00.468 nvme1n2 : 5.11 1947.31 7.61 0.00 0.00 65033.06 5444.53 88725.66 00:13:00.468 [2024-11-29T15:55:11.899Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:00.468 Verification LBA range: start 0x80000 length 0x80000 00:13:00.468 nvme1n2 : 5.09 2139.64 8.36 0.00 0.00 59309.72 4436.28 69367.34 00:13:00.468 [2024-11-29T15:55:11.899Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:00.468 Verification LBA range: start 0x0 length 0x80000 00:13:00.468 nvme1n3 : 5.09 1868.70 7.30 0.00 0.00 67789.51 9074.22 94371.84 00:13:00.468 [2024-11-29T15:55:11.899Z] Job: nvme1n3 
(Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:00.468 Verification LBA range: start 0x80000 length 0x80000 00:13:00.468 nvme1n3 : 5.09 2198.47 8.59 0.00 0.00 57805.99 6856.07 69367.34 00:13:00.468 [2024-11-29T15:55:11.899Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:00.468 Verification LBA range: start 0x0 length 0xbd0bd 00:13:00.468 nvme2n1 : 5.11 1951.93 7.62 0.00 0.00 64831.29 6906.49 79449.80 00:13:00.468 [2024-11-29T15:55:11.899Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:00.468 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:00.468 nvme2n1 : 5.09 2005.43 7.83 0.00 0.00 63287.11 8368.44 86305.87 00:13:00.468 [2024-11-29T15:55:11.899Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:00.468 Verification LBA range: start 0x0 length 0xa0000 00:13:00.468 nvme3n1 : 5.11 2096.42 8.19 0.00 0.00 60249.78 4713.55 80659.69 00:13:00.468 [2024-11-29T15:55:11.899Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:00.468 Verification LBA range: start 0xa0000 length 0xa0000 00:13:00.468 nvme3n1 : 5.10 2294.07 8.96 0.00 0.00 55241.98 8670.92 71787.13 00:13:00.468 [2024-11-29T15:55:11.899Z] =================================================================================================================== 00:13:00.468 [2024-11-29T15:55:11.899Z] Total : 24833.87 97.01 0.00 0.00 61320.74 4436.28 94371.84 00:13:01.040 00:13:01.040 real 0m6.986s 00:13:01.040 user 0m9.051s 00:13:01.040 sys 0m3.003s 00:13:01.040 ************************************ 00:13:01.040 END TEST bdev_verify 00:13:01.040 ************************************ 00:13:01.040 15:55:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:01.040 15:55:12 -- common/autotest_common.sh@10 -- # set +x 00:13:01.301 15:55:12 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:01.301 15:55:12 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:13:01.301 15:55:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:01.301 15:55:12 -- common/autotest_common.sh@10 -- # set +x 00:13:01.301 ************************************ 00:13:01.301 START TEST bdev_verify_big_io 00:13:01.301 ************************************ 00:13:01.301 15:55:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:01.301 [2024-11-29 15:55:12.586466] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:01.301 [2024-11-29 15:55:12.587506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68391 ] 00:13:01.563 [2024-11-29 15:55:12.758388] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:01.563 [2024-11-29 15:55:12.982411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.563 [2024-11-29 15:55:12.982509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.137 Running I/O for 5 seconds... 
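
The verify pass now running is SPDK's bdevperf example, launched earlier in the log with -q 128 -o 4096 -w verify -t 5 -C -m 0x3 against the generated bdev.json. A minimal standalone sketch of that invocation, assuming the repo root as working directory (bdev.json stands in for the test's generated config):

    # 5-second verify workload (write, read back, compare) against every
    # bdev in the JSON config, mirroring the flags in the logged command:
    #   -q 128     queue depth per job
    #   -o 4096    4 KiB I/O size
    #   -w verify  read-back-and-compare workload
    #   -t 5       seconds to run
    #   -C         let multiple cores drive the same bdev
    #   -m 0x3     core mask: reactors on cores 0 and 1, as logged above
    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

Because -C allows core sharing and -m 0x3 enables two reactors, bdevperf schedules one verify job per core for each bdev, which is why every nvme*n* device appears twice (core masks 0x1 and 0x2) in the latency table that follows, before the aggregate Total line.
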
00:13:08.724 00:13:08.724 Latency(us) 00:13:08.724 [2024-11-29T15:55:20.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.724 [2024-11-29T15:55:20.156Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:08.725 Verification LBA range: start 0x0 length 0x2000 00:13:08.725 nvme0n1 : 5.73 197.17 12.32 0.00 0.00 633008.73 80659.69 864671.90 00:13:08.725 [2024-11-29T15:55:20.156Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:08.725 Verification LBA range: start 0x2000 length 0x2000 00:13:08.725 nvme0n1 : 5.47 303.13 18.95 0.00 0.00 410141.10 47589.22 535580.36 00:13:08.725 [2024-11-29T15:55:20.156Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:08.725 Verification LBA range: start 0x0 length 0x8000 00:13:08.725 nvme1n1 : 5.73 183.82 11.49 0.00 0.00 657437.20 76223.41 838860.80 00:13:08.725 [2024-11-29T15:55:20.156Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:08.725 Verification LBA range: start 0x8000 length 0x8000 00:13:08.725 nvme1n1 : 5.60 279.86 17.49 0.00 0.00 427812.16 51622.20 703352.52 00:13:08.725 [2024-11-29T15:55:20.156Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:08.725 Verification LBA range: start 0x0 length 0x8000 00:13:08.725 nvme1n2 : 5.75 211.65 13.23 0.00 0.00 553702.80 53638.70 822728.86 00:13:08.725 [2024-11-29T15:55:20.156Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:08.725 Verification LBA range: start 0x8000 length 0x8000 00:13:08.725 nvme1n2 : 5.60 263.78 16.49 0.00 0.00 454243.68 125022.52 490410.93 00:13:08.725 [2024-11-29T15:55:20.156Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:08.725 Verification LBA range: start 0x0 length 0x8000 00:13:08.725 nvme1n3 : 5.78 166.36 10.40 0.00 0.00 681526.61 39926.55 832408.02 00:13:08.725 [2024-11-29T15:55:20.156Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:08.725 Verification LBA range: start 0x8000 length 0x8000 00:13:08.725 nvme1n3 : 5.62 325.25 20.33 0.00 0.00 364069.50 12603.08 571070.62 00:13:08.725 [2024-11-29T15:55:20.156Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:08.725 Verification LBA range: start 0x0 length 0xbd0b 00:13:08.725 nvme2n1 : 5.78 224.23 14.01 0.00 0.00 497981.25 27021.00 690446.97 00:13:08.725 [2024-11-29T15:55:20.156Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:08.725 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:08.725 nvme2n1 : 5.63 329.38 20.59 0.00 0.00 356749.93 7713.08 442015.11 00:13:08.725 [2024-11-29T15:55:20.156Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:08.725 Verification LBA range: start 0x0 length 0xa000 00:13:08.725 nvme3n1 : 5.81 269.91 16.87 0.00 0.00 405050.55 1688.81 777559.43 00:13:08.725 [2024-11-29T15:55:20.156Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:08.725 Verification LBA range: start 0xa000 length 0xa000 00:13:08.725 nvme3n1 : 5.63 324.48 20.28 0.00 0.00 357508.96 12048.54 525901.19 00:13:08.725 [2024-11-29T15:55:20.156Z] =================================================================================================================== 00:13:08.725 [2024-11-29T15:55:20.156Z] Total : 3079.02 192.44 0.00 0.00 459880.52 1688.81 864671.90 00:13:08.987 00:13:08.987 real 0m7.834s 00:13:08.987 user 
0m13.899s 00:13:08.987 sys 0m0.640s 00:13:08.987 15:55:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:08.987 15:55:20 -- common/autotest_common.sh@10 -- # set +x 00:13:08.987 ************************************ 00:13:08.987 END TEST bdev_verify_big_io 00:13:08.987 ************************************ 00:13:08.987 15:55:20 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:08.987 15:55:20 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:08.987 15:55:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:08.987 15:55:20 -- common/autotest_common.sh@10 -- # set +x 00:13:09.249 ************************************ 00:13:09.249 START TEST bdev_write_zeroes 00:13:09.249 ************************************ 00:13:09.249 15:55:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:09.249 [2024-11-29 15:55:20.495184] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:09.249 [2024-11-29 15:55:20.495335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68495 ] 00:13:09.249 [2024-11-29 15:55:20.650809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.511 [2024-11-29 15:55:20.879274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.085 Running I/O for 1 seconds... 00:13:11.027 00:13:11.027 Latency(us) 00:13:11.027 [2024-11-29T15:55:22.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:11.027 [2024-11-29T15:55:22.458Z] Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:11.027 nvme0n1 : 1.02 11303.18 44.15 0.00 0.00 11313.29 8368.44 21173.17 00:13:11.027 [2024-11-29T15:55:22.458Z] Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:11.027 nvme1n1 : 1.02 11289.96 44.10 0.00 0.00 11316.04 8368.44 20064.10 00:13:11.027 [2024-11-29T15:55:22.458Z] Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:11.027 nvme1n2 : 1.02 11277.21 44.05 0.00 0.00 11319.83 8368.44 19761.62 00:13:11.027 [2024-11-29T15:55:22.458Z] Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:11.027 nvme1n3 : 1.03 11351.40 44.34 0.00 0.00 11235.31 5469.74 22584.71 00:13:11.027 [2024-11-29T15:55:22.458Z] Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:11.027 nvme2n1 : 1.03 12585.63 49.16 0.00 0.00 10113.47 4889.99 16938.54 00:13:11.027 [2024-11-29T15:55:22.458Z] Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:11.027 nvme3n1 : 1.02 11261.68 43.99 0.00 0.00 11250.24 4713.55 21576.47 00:13:11.027 [2024-11-29T15:55:22.458Z] =================================================================================================================== 00:13:11.027 [2024-11-29T15:55:22.458Z] Total : 69069.06 269.80 0.00 0.00 11072.14 4713.55 22584.71 00:13:11.971 00:13:11.971 real 0m2.771s 00:13:11.971 user 0m2.109s 00:13:11.971 sys 0m0.485s 00:13:11.971 15:55:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 
00:13:11.971 15:55:23 -- common/autotest_common.sh@10 -- # set +x 00:13:11.971 ************************************ 00:13:11.971 END TEST bdev_write_zeroes 00:13:11.971 ************************************ 00:13:11.971 15:55:23 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:11.971 15:55:23 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:11.971 15:55:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:11.971 15:55:23 -- common/autotest_common.sh@10 -- # set +x 00:13:11.971 ************************************ 00:13:11.971 START TEST bdev_json_nonenclosed 00:13:11.971 ************************************ 00:13:11.971 15:55:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:11.971 [2024-11-29 15:55:23.332859] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:11.971 [2024-11-29 15:55:23.333015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68548 ] 00:13:12.232 [2024-11-29 15:55:23.477190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.494 [2024-11-29 15:55:23.693131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.494 [2024-11-29 15:55:23.693322] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:12.494 [2024-11-29 15:55:23.693344] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:12.755 00:13:12.755 real 0m0.733s 00:13:12.755 user 0m0.519s 00:13:12.755 sys 0m0.107s 00:13:12.755 ************************************ 00:13:12.755 END TEST bdev_json_nonenclosed 00:13:12.755 ************************************ 00:13:12.755 15:55:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:12.755 15:55:23 -- common/autotest_common.sh@10 -- # set +x 00:13:12.755 15:55:24 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:12.755 15:55:24 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:12.755 15:55:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:12.755 15:55:24 -- common/autotest_common.sh@10 -- # set +x 00:13:12.755 ************************************ 00:13:12.755 START TEST bdev_json_nonarray 00:13:12.755 ************************************ 00:13:12.755 15:55:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:12.755 [2024-11-29 15:55:24.127609] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
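
bdev_json_nonenclosed, which just finished, and bdev_json_nonarray, starting below, are negative-path checks: each hands bdevperf a deliberately malformed config (per the logged errors, nonenclosed.json is missing its enclosing braces, and nonarray.json defines "subsystems" as something other than an array) and counts on the app refusing to start. A hedged sketch of the pattern; the repo's actual assertion plumbing differs:

    # Negative-path pattern: a malformed config must make bdevperf exit
    # non-zero; a zero exit here would itself be the test failure.
    if ./build/examples/bdevperf --json test/bdev/nonenclosed.json \
            -q 128 -o 4096 -w write_zeroes -t 1 ''; then
        echo "ERROR: bdevperf accepted an invalid JSON config" >&2
        exit 1
    fi
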
00:13:12.755 [2024-11-29 15:55:24.127755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68578 ] 00:13:13.016 [2024-11-29 15:55:24.282270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.278 [2024-11-29 15:55:24.471024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.278 [2024-11-29 15:55:24.471239] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:13.278 [2024-11-29 15:55:24.471270] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:13.539 00:13:13.539 real 0m0.720s 00:13:13.539 user 0m0.511s 00:13:13.539 sys 0m0.102s 00:13:13.539 ************************************ 00:13:13.539 END TEST bdev_json_nonarray 00:13:13.539 ************************************ 00:13:13.539 15:55:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:13.539 15:55:24 -- common/autotest_common.sh@10 -- # set +x 00:13:13.539 15:55:24 -- bdev/blockdev.sh@785 -- # [[ xnvme == bdev ]] 00:13:13.539 15:55:24 -- bdev/blockdev.sh@792 -- # [[ xnvme == gpt ]] 00:13:13.539 15:55:24 -- bdev/blockdev.sh@796 -- # [[ xnvme == crypto_sw ]] 00:13:13.539 15:55:24 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:13:13.540 15:55:24 -- bdev/blockdev.sh@809 -- # cleanup 00:13:13.540 15:55:24 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:13.540 15:55:24 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:13.540 15:55:24 -- bdev/blockdev.sh@24 -- # [[ xnvme == rbd ]] 00:13:13.540 15:55:24 -- bdev/blockdev.sh@28 -- # [[ xnvme == daos ]] 00:13:13.540 15:55:24 -- bdev/blockdev.sh@32 -- # [[ xnvme = \g\p\t ]] 00:13:13.540 15:55:24 -- bdev/blockdev.sh@38 -- # [[ xnvme == xnvme ]] 00:13:13.540 15:55:24 -- bdev/blockdev.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:14.480 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:16.396 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:13:16.396 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:13:17.342 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:13:17.342 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:13:17.342 00:13:17.342 real 0m56.701s 00:13:17.342 user 1m22.701s 00:13:17.342 sys 0m34.674s 00:13:17.342 15:55:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:17.342 ************************************ 00:13:17.342 END TEST blockdev_xnvme 00:13:17.342 ************************************ 00:13:17.342 15:55:28 -- common/autotest_common.sh@10 -- # set +x 00:13:17.342 15:55:28 -- spdk/autotest.sh@246 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:17.342 15:55:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:17.342 15:55:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:17.342 15:55:28 -- common/autotest_common.sh@10 -- # set +x 00:13:17.342 ************************************ 00:13:17.342 START TEST ublk 00:13:17.342 ************************************ 00:13:17.342 15:55:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:17.604 * Looking for test storage... 
00:13:17.604 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:17.604 15:55:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:17.604 15:55:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:17.604 15:55:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:17.604 15:55:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:17.604 15:55:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:17.604 15:55:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:17.604 15:55:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:17.604 15:55:28 -- scripts/common.sh@335 -- # IFS=.-: 00:13:17.604 15:55:28 -- scripts/common.sh@335 -- # read -ra ver1 00:13:17.604 15:55:28 -- scripts/common.sh@336 -- # IFS=.-: 00:13:17.604 15:55:28 -- scripts/common.sh@336 -- # read -ra ver2 00:13:17.604 15:55:28 -- scripts/common.sh@337 -- # local 'op=<' 00:13:17.604 15:55:28 -- scripts/common.sh@339 -- # ver1_l=2 00:13:17.604 15:55:28 -- scripts/common.sh@340 -- # ver2_l=1 00:13:17.604 15:55:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:17.604 15:55:28 -- scripts/common.sh@343 -- # case "$op" in 00:13:17.604 15:55:28 -- scripts/common.sh@344 -- # : 1 00:13:17.604 15:55:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:17.604 15:55:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:17.604 15:55:28 -- scripts/common.sh@364 -- # decimal 1 00:13:17.604 15:55:28 -- scripts/common.sh@352 -- # local d=1 00:13:17.604 15:55:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:17.604 15:55:28 -- scripts/common.sh@354 -- # echo 1 00:13:17.604 15:55:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:17.604 15:55:28 -- scripts/common.sh@365 -- # decimal 2 00:13:17.604 15:55:28 -- scripts/common.sh@352 -- # local d=2 00:13:17.604 15:55:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:17.604 15:55:28 -- scripts/common.sh@354 -- # echo 2 00:13:17.604 15:55:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:17.604 15:55:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:17.604 15:55:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:17.604 15:55:28 -- scripts/common.sh@367 -- # return 0 00:13:17.604 15:55:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:17.604 15:55:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:17.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.604 --rc genhtml_branch_coverage=1 00:13:17.604 --rc genhtml_function_coverage=1 00:13:17.604 --rc genhtml_legend=1 00:13:17.604 --rc geninfo_all_blocks=1 00:13:17.604 --rc geninfo_unexecuted_blocks=1 00:13:17.604 00:13:17.604 ' 00:13:17.604 15:55:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:17.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.604 --rc genhtml_branch_coverage=1 00:13:17.604 --rc genhtml_function_coverage=1 00:13:17.604 --rc genhtml_legend=1 00:13:17.604 --rc geninfo_all_blocks=1 00:13:17.604 --rc geninfo_unexecuted_blocks=1 00:13:17.604 00:13:17.604 ' 00:13:17.604 15:55:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:17.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.604 --rc genhtml_branch_coverage=1 00:13:17.604 --rc genhtml_function_coverage=1 00:13:17.604 --rc genhtml_legend=1 00:13:17.604 --rc geninfo_all_blocks=1 00:13:17.604 --rc geninfo_unexecuted_blocks=1 00:13:17.604 00:13:17.604 ' 00:13:17.604 15:55:28 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:17.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.604 --rc genhtml_branch_coverage=1 00:13:17.604 --rc genhtml_function_coverage=1 00:13:17.604 --rc genhtml_legend=1 00:13:17.604 --rc geninfo_all_blocks=1 00:13:17.604 --rc geninfo_unexecuted_blocks=1 00:13:17.604 00:13:17.604 ' 00:13:17.604 15:55:28 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:17.604 15:55:28 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:17.604 15:55:28 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:17.604 15:55:28 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:17.604 15:55:28 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:17.604 15:55:28 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:17.604 15:55:28 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:17.604 15:55:28 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:17.604 15:55:28 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:17.604 15:55:28 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:17.604 15:55:28 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:17.604 15:55:28 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:17.604 15:55:28 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:17.604 15:55:28 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:17.604 15:55:28 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:17.604 15:55:28 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:17.604 15:55:28 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:17.604 15:55:28 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:17.604 15:55:28 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:17.604 15:55:28 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:17.604 15:55:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:17.604 15:55:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:17.604 15:55:28 -- common/autotest_common.sh@10 -- # set +x 00:13:17.604 ************************************ 00:13:17.604 START TEST test_save_ublk_config 00:13:17.604 ************************************ 00:13:17.604 15:55:28 -- common/autotest_common.sh@1114 -- # test_save_config 00:13:17.604 15:55:28 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:17.604 15:55:28 -- ublk/ublk.sh@103 -- # tgtpid=68889 00:13:17.604 15:55:28 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:17.604 15:55:28 -- ublk/ublk.sh@106 -- # waitforlisten 68889 00:13:17.604 15:55:28 -- common/autotest_common.sh@829 -- # '[' -z 68889 ']' 00:13:17.604 15:55:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.604 15:55:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:17.604 15:55:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.604 15:55:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:17.604 15:55:28 -- common/autotest_common.sh@10 -- # set +x 00:13:17.604 15:55:28 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:17.604 [2024-11-29 15:55:29.014890] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
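
test_save_ublk_config, starting above as pid 68889, builds a one-queue ublk device over a malloc bdev and then snapshots the target's live configuration; the large JSON dump a few lines below is the output of the save_config RPC. A sketch of the equivalent manual RPC sequence, with sizes matching the saved config (num_blocks 8192 x block_size 4096 = 32 MiB) and saved.json as a placeholder path:

    # Build the stack the test exercises, then capture it as JSON.
    ./scripts/rpc.py ublk_create_target                     # ublk target thread
    ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096  # 32 MiB, 4 KiB blocks
    ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128  # exposes /dev/ublkb0
    ./scripts/rpc.py save_config > saved.json               # live config as JSON
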
00:13:17.604 [2024-11-29 15:55:29.015044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68889 ] 00:13:17.867 [2024-11-29 15:55:29.168576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.128 [2024-11-29 15:55:29.400542] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:18.128 [2024-11-29 15:55:29.400781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.518 15:55:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:19.518 15:55:30 -- common/autotest_common.sh@862 -- # return 0 00:13:19.518 15:55:30 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:19.518 15:55:30 -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:19.518 15:55:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.518 15:55:30 -- common/autotest_common.sh@10 -- # set +x 00:13:19.518 [2024-11-29 15:55:30.557837] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:19.518 malloc0 00:13:19.518 [2024-11-29 15:55:30.629130] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:19.518 [2024-11-29 15:55:30.629230] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:19.518 [2024-11-29 15:55:30.629239] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:19.518 [2024-11-29 15:55:30.629249] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:19.518 [2024-11-29 15:55:30.638098] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:19.518 [2024-11-29 15:55:30.638136] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:19.518 [2024-11-29 15:55:30.645007] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:19.518 [2024-11-29 15:55:30.645130] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:19.518 [2024-11-29 15:55:30.662000] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:19.518 0 00:13:19.518 15:55:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.518 15:55:30 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:19.518 15:55:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.518 15:55:30 -- common/autotest_common.sh@10 -- # set +x 00:13:19.518 15:55:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.518 15:55:30 -- ublk/ublk.sh@115 -- # config='{ 00:13:19.518 "subsystems": [ 00:13:19.518 { 00:13:19.518 "subsystem": "iobuf", 00:13:19.518 "config": [ 00:13:19.518 { 00:13:19.518 "method": "iobuf_set_options", 00:13:19.518 "params": { 00:13:19.518 "small_pool_count": 8192, 00:13:19.518 "large_pool_count": 1024, 00:13:19.518 "small_bufsize": 8192, 00:13:19.518 "large_bufsize": 135168 00:13:19.518 } 00:13:19.518 } 00:13:19.518 ] 00:13:19.518 }, 00:13:19.518 { 00:13:19.518 "subsystem": "sock", 00:13:19.518 "config": [ 00:13:19.518 { 00:13:19.518 "method": "sock_impl_set_options", 00:13:19.518 "params": { 00:13:19.518 "impl_name": "posix", 00:13:19.518 "recv_buf_size": 2097152, 00:13:19.518 "send_buf_size": 2097152, 00:13:19.518 "enable_recv_pipe": true, 00:13:19.518 "enable_quickack": false, 00:13:19.518 "enable_placement_id": 0, 00:13:19.518 
"enable_zerocopy_send_server": true, 00:13:19.518 "enable_zerocopy_send_client": false, 00:13:19.518 "zerocopy_threshold": 0, 00:13:19.518 "tls_version": 0, 00:13:19.518 "enable_ktls": false 00:13:19.518 } 00:13:19.518 }, 00:13:19.518 { 00:13:19.518 "method": "sock_impl_set_options", 00:13:19.518 "params": { 00:13:19.518 "impl_name": "ssl", 00:13:19.518 "recv_buf_size": 4096, 00:13:19.518 "send_buf_size": 4096, 00:13:19.518 "enable_recv_pipe": true, 00:13:19.518 "enable_quickack": false, 00:13:19.519 "enable_placement_id": 0, 00:13:19.519 "enable_zerocopy_send_server": true, 00:13:19.519 "enable_zerocopy_send_client": false, 00:13:19.519 "zerocopy_threshold": 0, 00:13:19.519 "tls_version": 0, 00:13:19.519 "enable_ktls": false 00:13:19.519 } 00:13:19.519 } 00:13:19.519 ] 00:13:19.519 }, 00:13:19.519 { 00:13:19.519 "subsystem": "vmd", 00:13:19.519 "config": [] 00:13:19.519 }, 00:13:19.519 { 00:13:19.519 "subsystem": "accel", 00:13:19.519 "config": [ 00:13:19.519 { 00:13:19.519 "method": "accel_set_options", 00:13:19.519 "params": { 00:13:19.519 "small_cache_size": 128, 00:13:19.519 "large_cache_size": 16, 00:13:19.519 "task_count": 2048, 00:13:19.519 "sequence_count": 2048, 00:13:19.519 "buf_count": 2048 00:13:19.519 } 00:13:19.519 } 00:13:19.519 ] 00:13:19.519 }, 00:13:19.519 { 00:13:19.519 "subsystem": "bdev", 00:13:19.519 "config": [ 00:13:19.519 { 00:13:19.519 "method": "bdev_set_options", 00:13:19.519 "params": { 00:13:19.519 "bdev_io_pool_size": 65535, 00:13:19.519 "bdev_io_cache_size": 256, 00:13:19.519 "bdev_auto_examine": true, 00:13:19.519 "iobuf_small_cache_size": 128, 00:13:19.519 "iobuf_large_cache_size": 16 00:13:19.519 } 00:13:19.519 }, 00:13:19.519 { 00:13:19.519 "method": "bdev_raid_set_options", 00:13:19.519 "params": { 00:13:19.519 "process_window_size_kb": 1024 00:13:19.519 } 00:13:19.519 }, 00:13:19.519 { 00:13:19.519 "method": "bdev_iscsi_set_options", 00:13:19.519 "params": { 00:13:19.519 "timeout_sec": 30 00:13:19.519 } 00:13:19.519 }, 00:13:19.519 { 00:13:19.519 "method": "bdev_nvme_set_options", 00:13:19.519 "params": { 00:13:19.519 "action_on_timeout": "none", 00:13:19.519 "timeout_us": 0, 00:13:19.519 "timeout_admin_us": 0, 00:13:19.519 "keep_alive_timeout_ms": 10000, 00:13:19.519 "transport_retry_count": 4, 00:13:19.519 "arbitration_burst": 0, 00:13:19.519 "low_priority_weight": 0, 00:13:19.519 "medium_priority_weight": 0, 00:13:19.519 "high_priority_weight": 0, 00:13:19.519 "nvme_adminq_poll_period_us": 10000, 00:13:19.519 "nvme_ioq_poll_period_us": 0, 00:13:19.519 "io_queue_requests": 0, 00:13:19.519 "delay_cmd_submit": true, 00:13:19.519 "bdev_retry_count": 3, 00:13:19.519 "transport_ack_timeout": 0, 00:13:19.519 "ctrlr_loss_timeout_sec": 0, 00:13:19.519 "reconnect_delay_sec": 0, 00:13:19.519 "fast_io_fail_timeout_sec": 0, 00:13:19.519 "generate_uuids": false, 00:13:19.519 "transport_tos": 0, 00:13:19.519 "io_path_stat": false, 00:13:19.519 "allow_accel_sequence": false 00:13:19.519 } 00:13:19.519 }, 00:13:19.519 { 00:13:19.519 "method": "bdev_nvme_set_hotplug", 00:13:19.519 "params": { 00:13:19.519 "period_us": 100000, 00:13:19.519 "enable": false 00:13:19.519 } 00:13:19.519 }, 00:13:19.519 { 00:13:19.519 "method": "bdev_malloc_create", 00:13:19.519 "params": { 00:13:19.519 "name": "malloc0", 00:13:19.519 "num_blocks": 8192, 00:13:19.519 "block_size": 4096, 00:13:19.519 "physical_block_size": 4096, 00:13:19.519 "uuid": "e68ba3a8-e76f-469e-8ac2-d06cfe36ab40", 00:13:19.519 "optimal_io_boundary": 0 00:13:19.519 } 00:13:19.519 }, 00:13:19.519 { 00:13:19.519 
"method": "bdev_wait_for_examine" 00:13:19.519 } 00:13:19.519 ] 00:13:19.519 }, 00:13:19.519 { 00:13:19.519 "subsystem": "scsi", 00:13:19.519 "config": null 00:13:19.519 }, 00:13:19.519 { 00:13:19.519 "subsystem": "scheduler", 00:13:19.519 "config": [ 00:13:19.519 { 00:13:19.519 "method": "framework_set_scheduler", 00:13:19.519 "params": { 00:13:19.519 "name": "static" 00:13:19.519 } 00:13:19.519 } 00:13:19.519 ] 00:13:19.519 }, 00:13:19.519 { 00:13:19.519 "subsystem": "vhost_scsi", 00:13:19.519 "config": [] 00:13:19.519 }, 00:13:19.519 { 00:13:19.519 "subsystem": "vhost_blk", 00:13:19.519 "config": [] 00:13:19.519 }, 00:13:19.519 { 00:13:19.519 "subsystem": "ublk", 00:13:19.519 "config": [ 00:13:19.519 { 00:13:19.519 "method": "ublk_create_target", 00:13:19.519 "params": { 00:13:19.519 "cpumask": "1" 00:13:19.519 } 00:13:19.519 }, 00:13:19.519 { 00:13:19.519 "method": "ublk_start_disk", 00:13:19.519 "params": { 00:13:19.519 "bdev_name": "malloc0", 00:13:19.519 "ublk_id": 0, 00:13:19.519 "num_queues": 1, 00:13:19.519 "queue_depth": 128 00:13:19.519 } 00:13:19.519 } 00:13:19.519 ] 00:13:19.519 }, 00:13:19.519 { 00:13:19.519 "subsystem": "nbd", 00:13:19.519 "config": [] 00:13:19.519 }, 00:13:19.519 { 00:13:19.519 "subsystem": "nvmf", 00:13:19.519 "config": [ 00:13:19.519 { 00:13:19.519 "method": "nvmf_set_config", 00:13:19.519 "params": { 00:13:19.519 "discovery_filter": "match_any", 00:13:19.519 "admin_cmd_passthru": { 00:13:19.519 "identify_ctrlr": false 00:13:19.519 } 00:13:19.519 } 00:13:19.519 }, 00:13:19.519 { 00:13:19.519 "method": "nvmf_set_max_subsystems", 00:13:19.519 "params": { 00:13:19.519 "max_subsystems": 1024 00:13:19.519 } 00:13:19.519 }, 00:13:19.519 { 00:13:19.519 "method": "nvmf_set_crdt", 00:13:19.519 "params": { 00:13:19.519 "crdt1": 0, 00:13:19.519 "crdt2": 0, 00:13:19.519 "crdt3": 0 00:13:19.519 } 00:13:19.519 } 00:13:19.519 ] 00:13:19.519 }, 00:13:19.519 { 00:13:19.519 "subsystem": "iscsi", 00:13:19.519 "config": [ 00:13:19.519 { 00:13:19.519 "method": "iscsi_set_options", 00:13:19.519 "params": { 00:13:19.519 "node_base": "iqn.2016-06.io.spdk", 00:13:19.519 "max_sessions": 128, 00:13:19.519 "max_connections_per_session": 2, 00:13:19.519 "max_queue_depth": 64, 00:13:19.519 "default_time2wait": 2, 00:13:19.519 "default_time2retain": 20, 00:13:19.519 "first_burst_length": 8192, 00:13:19.519 "immediate_data": true, 00:13:19.519 "allow_duplicated_isid": false, 00:13:19.519 "error_recovery_level": 0, 00:13:19.519 "nop_timeout": 60, 00:13:19.519 "nop_in_interval": 30, 00:13:19.519 "disable_chap": false, 00:13:19.519 "require_chap": false, 00:13:19.519 "mutual_chap": false, 00:13:19.519 "chap_group": 0, 00:13:19.519 "max_large_datain_per_connection": 64, 00:13:19.519 "max_r2t_per_connection": 4, 00:13:19.519 "pdu_pool_size": 36864, 00:13:19.519 "immediate_data_pool_size": 16384, 00:13:19.519 "data_out_pool_size": 2048 00:13:19.519 } 00:13:19.519 } 00:13:19.519 ] 00:13:19.519 } 00:13:19.519 ] 00:13:19.519 }' 00:13:19.519 15:55:30 -- ublk/ublk.sh@116 -- # killprocess 68889 00:13:19.519 15:55:30 -- common/autotest_common.sh@936 -- # '[' -z 68889 ']' 00:13:19.519 15:55:30 -- common/autotest_common.sh@940 -- # kill -0 68889 00:13:19.519 15:55:30 -- common/autotest_common.sh@941 -- # uname 00:13:19.519 15:55:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:19.519 15:55:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68889 00:13:19.519 15:55:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:19.519 15:55:30 -- 
common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:19.519 killing process with pid 68889 00:13:19.519 15:55:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68889' 00:13:19.519 15:55:30 -- common/autotest_common.sh@955 -- # kill 68889 00:13:19.519 15:55:30 -- common/autotest_common.sh@960 -- # wait 68889 00:13:20.907 [2024-11-29 15:55:32.067674] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:20.907 [2024-11-29 15:55:32.104113] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:20.907 [2024-11-29 15:55:32.104267] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:20.907 [2024-11-29 15:55:32.105300] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:20.907 [2024-11-29 15:55:32.105355] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:20.907 [2024-11-29 15:55:32.105370] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:20.907 [2024-11-29 15:55:32.105402] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:20.907 [2024-11-29 15:55:32.105553] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:22.291 15:55:33 -- ublk/ublk.sh@119 -- # tgtpid=68951 00:13:22.291 15:55:33 -- ublk/ublk.sh@121 -- # waitforlisten 68951 00:13:22.291 15:55:33 -- common/autotest_common.sh@829 -- # '[' -z 68951 ']' 00:13:22.291 15:55:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.291 15:55:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.291 15:55:33 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:22.291 15:55:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
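
The second half of the round-trip starts here: after the first target (pid 68889) is killed and its ublk device torn down via STOP_DEV/DEL_DEV, the captured JSON is fed to a brand-new spdk_tgt through -c /dev/fd/63, which is what a bash process substitution expands to. A sketch of that restore step, reusing the hypothetical saved.json from the earlier sketch:

    # Replay the captured config into a fresh target; the log's /dev/fd/63
    # is what a <(...) process substitution expands to.
    ./build/bin/spdk_tgt -L ublk -c <(cat saved.json) &
    tgtpid=$!
    # ...after waiting for the RPC socket, verify the device came back:
    ./scripts/rpc.py ublk_get_disks    # should report ublk_id 0 -> /dev/ublkb0

The test then confirms the device survived the round-trip by checking ublk_get_disks for /dev/ublkb0, exactly as the rpc_cmd and jq calls further down show.
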
00:13:22.291 15:55:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.291 15:55:33 -- common/autotest_common.sh@10 -- # set +x 00:13:22.291 15:55:33 -- ublk/ublk.sh@118 -- # echo '{ 00:13:22.291 "subsystems": [ 00:13:22.291 { 00:13:22.291 "subsystem": "iobuf", 00:13:22.291 "config": [ 00:13:22.291 { 00:13:22.291 "method": "iobuf_set_options", 00:13:22.291 "params": { 00:13:22.291 "small_pool_count": 8192, 00:13:22.291 "large_pool_count": 1024, 00:13:22.291 "small_bufsize": 8192, 00:13:22.291 "large_bufsize": 135168 00:13:22.291 } 00:13:22.291 } 00:13:22.291 ] 00:13:22.291 }, 00:13:22.291 { 00:13:22.291 "subsystem": "sock", 00:13:22.291 "config": [ 00:13:22.291 { 00:13:22.291 "method": "sock_impl_set_options", 00:13:22.291 "params": { 00:13:22.291 "impl_name": "posix", 00:13:22.291 "recv_buf_size": 2097152, 00:13:22.291 "send_buf_size": 2097152, 00:13:22.291 "enable_recv_pipe": true, 00:13:22.291 "enable_quickack": false, 00:13:22.291 "enable_placement_id": 0, 00:13:22.291 "enable_zerocopy_send_server": true, 00:13:22.291 "enable_zerocopy_send_client": false, 00:13:22.291 "zerocopy_threshold": 0, 00:13:22.291 "tls_version": 0, 00:13:22.291 "enable_ktls": false 00:13:22.291 } 00:13:22.291 }, 00:13:22.291 { 00:13:22.291 "method": "sock_impl_set_options", 00:13:22.291 "params": { 00:13:22.291 "impl_name": "ssl", 00:13:22.291 "recv_buf_size": 4096, 00:13:22.291 "send_buf_size": 4096, 00:13:22.291 "enable_recv_pipe": true, 00:13:22.291 "enable_quickack": false, 00:13:22.291 "enable_placement_id": 0, 00:13:22.291 "enable_zerocopy_send_server": true, 00:13:22.291 "enable_zerocopy_send_client": false, 00:13:22.291 "zerocopy_threshold": 0, 00:13:22.291 "tls_version": 0, 00:13:22.291 "enable_ktls": false 00:13:22.291 } 00:13:22.291 } 00:13:22.291 ] 00:13:22.291 }, 00:13:22.291 { 00:13:22.291 "subsystem": "vmd", 00:13:22.291 "config": [] 00:13:22.291 }, 00:13:22.291 { 00:13:22.291 "subsystem": "accel", 00:13:22.291 "config": [ 00:13:22.291 { 00:13:22.291 "method": "accel_set_options", 00:13:22.291 "params": { 00:13:22.291 "small_cache_size": 128, 00:13:22.291 "large_cache_size": 16, 00:13:22.291 "task_count": 2048, 00:13:22.291 "sequence_count": 2048, 00:13:22.291 "buf_count": 2048 00:13:22.291 } 00:13:22.291 } 00:13:22.291 ] 00:13:22.291 }, 00:13:22.291 { 00:13:22.291 "subsystem": "bdev", 00:13:22.291 "config": [ 00:13:22.291 { 00:13:22.291 "method": "bdev_set_options", 00:13:22.291 "params": { 00:13:22.291 "bdev_io_pool_size": 65535, 00:13:22.291 "bdev_io_cache_size": 256, 00:13:22.291 "bdev_auto_examine": true, 00:13:22.291 "iobuf_small_cache_size": 128, 00:13:22.291 "iobuf_large_cache_size": 16 00:13:22.291 } 00:13:22.291 }, 00:13:22.291 { 00:13:22.291 "method": "bdev_raid_set_options", 00:13:22.291 "params": { 00:13:22.291 "process_window_size_kb": 1024 00:13:22.291 } 00:13:22.291 }, 00:13:22.291 { 00:13:22.291 "method": "bdev_iscsi_set_options", 00:13:22.291 "params": { 00:13:22.291 "timeout_sec": 30 00:13:22.291 } 00:13:22.291 }, 00:13:22.291 { 00:13:22.291 "method": "bdev_nvme_set_options", 00:13:22.291 "params": { 00:13:22.291 "action_on_timeout": "none", 00:13:22.291 "timeout_us": 0, 00:13:22.291 "timeout_admin_us": 0, 00:13:22.291 "keep_alive_timeout_ms": 10000, 00:13:22.291 "transport_retry_count": 4, 00:13:22.291 "arbitration_burst": 0, 00:13:22.291 "low_priority_weight": 0, 00:13:22.291 "medium_priority_weight": 0, 00:13:22.291 "high_priority_weight": 0, 00:13:22.291 "nvme_adminq_poll_period_us": 10000, 00:13:22.291 "nvme_ioq_poll_period_us": 0, 00:13:22.291 
"io_queue_requests": 0, 00:13:22.291 "delay_cmd_submit": true, 00:13:22.291 "bdev_retry_count": 3, 00:13:22.291 "transport_ack_timeout": 0, 00:13:22.291 "ctrlr_loss_timeout_sec": 0, 00:13:22.291 "reconnect_delay_sec": 0, 00:13:22.291 "fast_io_fail_timeout_sec": 0, 00:13:22.291 "generate_uuids": false, 00:13:22.291 "transport_tos": 0, 00:13:22.291 "io_path_stat": false, 00:13:22.291 "allow_accel_sequence": false 00:13:22.291 } 00:13:22.291 }, 00:13:22.291 { 00:13:22.291 "method": "bdev_nvme_set_hotplug", 00:13:22.291 "params": { 00:13:22.291 "period_us": 100000, 00:13:22.291 "enable": false 00:13:22.291 } 00:13:22.291 }, 00:13:22.291 { 00:13:22.291 "method": "bdev_malloc_create", 00:13:22.291 "params": { 00:13:22.291 "name": "malloc0", 00:13:22.291 "num_blocks": 8192, 00:13:22.291 "block_size": 4096, 00:13:22.291 "physical_block_size": 4096, 00:13:22.291 "uuid": "e68ba3a8-e76f-469e-8ac2-d06cfe36ab40", 00:13:22.291 "optimal_io_boundary": 0 00:13:22.291 } 00:13:22.291 }, 00:13:22.291 { 00:13:22.291 "method": "bdev_wait_for_examine" 00:13:22.291 } 00:13:22.291 ] 00:13:22.291 }, 00:13:22.291 { 00:13:22.291 "subsystem": "scsi", 00:13:22.291 "config": null 00:13:22.291 }, 00:13:22.291 { 00:13:22.291 "subsystem": "scheduler", 00:13:22.291 "config": [ 00:13:22.291 { 00:13:22.291 "method": "framework_set_scheduler", 00:13:22.291 "params": { 00:13:22.291 "name": "static" 00:13:22.291 } 00:13:22.291 } 00:13:22.291 ] 00:13:22.291 }, 00:13:22.291 { 00:13:22.291 "subsystem": "vhost_scsi", 00:13:22.291 "config": [] 00:13:22.291 }, 00:13:22.291 { 00:13:22.291 "subsystem": "vhost_blk", 00:13:22.291 "config": [] 00:13:22.291 }, 00:13:22.291 { 00:13:22.291 "subsystem": "ublk", 00:13:22.291 "config": [ 00:13:22.291 { 00:13:22.291 "method": "ublk_create_target", 00:13:22.291 "params": { 00:13:22.291 "cpumask": "1" 00:13:22.291 } 00:13:22.291 }, 00:13:22.291 { 00:13:22.291 "method": "ublk_start_disk", 00:13:22.291 "params": { 00:13:22.291 "bdev_name": "malloc0", 00:13:22.291 "ublk_id": 0, 00:13:22.291 "num_queues": 1, 00:13:22.291 "queue_depth": 128 00:13:22.291 } 00:13:22.291 } 00:13:22.291 ] 00:13:22.291 }, 00:13:22.291 { 00:13:22.291 "subsystem": "nbd", 00:13:22.291 "config": [] 00:13:22.291 }, 00:13:22.291 { 00:13:22.291 "subsystem": "nvmf", 00:13:22.291 "config": [ 00:13:22.291 { 00:13:22.291 "method": "nvmf_set_config", 00:13:22.291 "params": { 00:13:22.291 "discovery_filter": "match_any", 00:13:22.291 "admin_cmd_passthru": { 00:13:22.291 "identify_ctrlr": false 00:13:22.291 } 00:13:22.291 } 00:13:22.291 }, 00:13:22.291 { 00:13:22.291 "method": "nvmf_set_max_subsystems", 00:13:22.291 "params": { 00:13:22.291 "max_subsystems": 1024 00:13:22.291 } 00:13:22.291 }, 00:13:22.291 { 00:13:22.291 "method": "nvmf_set_crdt", 00:13:22.292 "params": { 00:13:22.292 "crdt1": 0, 00:13:22.292 "crdt2": 0, 00:13:22.292 "crdt3": 0 00:13:22.292 } 00:13:22.292 } 00:13:22.292 ] 00:13:22.292 }, 00:13:22.292 { 00:13:22.292 "subsystem": "iscsi", 00:13:22.292 "config": [ 00:13:22.292 { 00:13:22.292 "method": "iscsi_set_options", 00:13:22.292 "params": { 00:13:22.292 "node_base": "iqn.2016-06.io.spdk", 00:13:22.292 "max_sessions": 128, 00:13:22.292 "max_connections_per_session": 2, 00:13:22.292 "max_queue_depth": 64, 00:13:22.292 "default_time2wait": 2, 00:13:22.292 "default_time2retain": 20, 00:13:22.292 "first_burst_length": 8192, 00:13:22.292 "immediate_data": true, 00:13:22.292 "allow_duplicated_isid": false, 00:13:22.292 "error_recovery_level": 0, 00:13:22.292 "nop_timeout": 60, 00:13:22.292 "nop_in_interval": 30, 00:13:22.292 
"disable_chap": false, 00:13:22.292 "require_chap": false, 00:13:22.292 "mutual_chap": false, 00:13:22.292 "chap_group": 0, 00:13:22.292 "max_large_datain_per_connection": 64, 00:13:22.292 "max_r2t_per_connection": 4, 00:13:22.292 "pdu_pool_size": 36864, 00:13:22.292 "immediate_data_pool_size": 16384, 00:13:22.292 "data_out_pool_size": 2048 00:13:22.292 } 00:13:22.292 } 00:13:22.292 ] 00:13:22.292 } 00:13:22.292 ] 00:13:22.292 }' 00:13:22.292 [2024-11-29 15:55:33.473786] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:22.292 [2024-11-29 15:55:33.473908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68951 ] 00:13:22.292 [2024-11-29 15:55:33.623203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.551 [2024-11-29 15:55:33.794100] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:22.551 [2024-11-29 15:55:33.794277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.118 [2024-11-29 15:55:34.380583] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:23.119 [2024-11-29 15:55:34.388071] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:23.119 [2024-11-29 15:55:34.388128] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:23.119 [2024-11-29 15:55:34.388134] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:23.119 [2024-11-29 15:55:34.388140] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:23.119 [2024-11-29 15:55:34.397042] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:23.119 [2024-11-29 15:55:34.397058] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:23.119 [2024-11-29 15:55:34.403990] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:23.119 [2024-11-29 15:55:34.404065] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:23.119 [2024-11-29 15:55:34.420990] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:23.685 15:55:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.685 15:55:34 -- common/autotest_common.sh@862 -- # return 0 00:13:23.685 15:55:34 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:23.685 15:55:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.685 15:55:34 -- common/autotest_common.sh@10 -- # set +x 00:13:23.685 15:55:34 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:23.685 15:55:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.685 15:55:35 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:23.685 15:55:35 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:23.685 15:55:35 -- ublk/ublk.sh@125 -- # killprocess 68951 00:13:23.685 15:55:35 -- common/autotest_common.sh@936 -- # '[' -z 68951 ']' 00:13:23.685 15:55:35 -- common/autotest_common.sh@940 -- # kill -0 68951 00:13:23.685 15:55:35 -- common/autotest_common.sh@941 -- # uname 00:13:23.685 15:55:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:23.685 15:55:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68951 00:13:23.685 
15:55:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:23.685 15:55:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:23.685 killing process with pid 68951
00:13:23.685 15:55:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68951'
00:13:23.685 15:55:35 -- common/autotest_common.sh@955 -- # kill 68951
00:13:23.685 15:55:35 -- common/autotest_common.sh@960 -- # wait 68951
00:13:24.615 [2024-11-29 15:55:35.768261] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:13:24.615 [2024-11-29 15:55:35.802056] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:13:24.615 [2024-11-29 15:55:35.802146] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:13:24.615 [2024-11-29 15:55:35.809996] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:13:24.615 [2024-11-29 15:55:35.810036] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:13:24.615 [2024-11-29 15:55:35.810042] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:13:24.615 [2024-11-29 15:55:35.810062] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown
00:13:24.615 [2024-11-29 15:55:35.810171] ublk.c: 728:_ublk_fini_done: *DEBUG*:
00:13:25.993 15:55:36 -- ublk/ublk.sh@126 -- # trap - EXIT
00:13:25.993 ************************************
00:13:25.993 END TEST test_save_ublk_config
00:13:25.993 ************************************
00:13:25.993
00:13:25.993 real 0m8.057s
00:13:25.993 user 0m6.142s
00:13:25.993 sys 0m2.899s
00:13:25.993 15:55:36 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:25.993 15:55:36 -- common/autotest_common.sh@10 -- # set +x
00:13:25.993 15:55:37 -- ublk/ublk.sh@139 -- # spdk_pid=69026
00:13:25.993 15:55:37 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:13:25.993 15:55:37 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:13:25.993 15:55:37 -- ublk/ublk.sh@141 -- # waitforlisten 69026
00:13:25.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:25.993 15:55:37 -- common/autotest_common.sh@829 -- # '[' -z 69026 ']'
00:13:25.993 15:55:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:25.993 15:55:37 -- common/autotest_common.sh@834 -- # local max_retries=100
00:13:25.993 15:55:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:25.993 15:55:37 -- common/autotest_common.sh@838 -- # xtrace_disable
00:13:25.993 15:55:37 -- common/autotest_common.sh@10 -- # set +x
00:13:25.993 [2024-11-29 15:55:37.094299] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
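The test_save_ublk_config flow that just completed is, at bottom, a save/replay round-trip over SPDK's JSON-RPC interface. A minimal sketch of the same steps with scripts/rpc.py, assuming a running spdk_tgt on the default /var/tmp/spdk.sock socket (the bdev size and file name here are illustrative, not the test's exact values):

    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 32 4096   # 32 MiB ramdisk, 4 KiB blocks (illustrative)
    scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128   # exposes /dev/ublkb0
    scripts/rpc.py save_config > ublk_config.json          # produces a JSON dump like the one above
    scripts/rpc.py ublk_stop_disk 0
    scripts/rpc.py ublk_destroy_target
    scripts/rpc.py load_config < ublk_config.json          # replays the captured target and disk

The ublk_get_disks/jq check above is the assertion half of the test: after the replay, /dev/ublkb0 must show up again as a block device.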
00:13:25.993 [2024-11-29 15:55:37.094559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69026 ] 00:13:25.993 [2024-11-29 15:55:37.242354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:26.256 [2024-11-29 15:55:37.474758] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:26.256 [2024-11-29 15:55:37.475466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.256 [2024-11-29 15:55:37.475600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.201 15:55:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:27.202 15:55:38 -- common/autotest_common.sh@862 -- # return 0 00:13:27.202 15:55:38 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:27.202 15:55:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:27.202 15:55:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:27.202 15:55:38 -- common/autotest_common.sh@10 -- # set +x 00:13:27.463 ************************************ 00:13:27.463 START TEST test_create_ublk 00:13:27.463 ************************************ 00:13:27.463 15:55:38 -- common/autotest_common.sh@1114 -- # test_create_ublk 00:13:27.463 15:55:38 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:27.463 15:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.463 15:55:38 -- common/autotest_common.sh@10 -- # set +x 00:13:27.463 [2024-11-29 15:55:38.655310] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:27.463 15:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.463 15:55:38 -- ublk/ublk.sh@33 -- # ublk_target= 00:13:27.463 15:55:38 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:27.463 15:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.463 15:55:38 -- common/autotest_common.sh@10 -- # set +x 00:13:27.463 15:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.463 15:55:38 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:27.463 15:55:38 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:27.463 15:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.463 15:55:38 -- common/autotest_common.sh@10 -- # set +x 00:13:27.463 [2024-11-29 15:55:38.891182] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:27.463 [2024-11-29 15:55:38.891631] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:27.463 [2024-11-29 15:55:38.891648] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:27.463 [2024-11-29 15:55:38.891658] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:27.723 [2024-11-29 15:55:38.900934] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:27.723 [2024-11-29 15:55:38.900988] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:27.723 [2024-11-29 15:55:38.908013] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:27.723 [2024-11-29 15:55:38.920250] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:27.723 [2024-11-29 15:55:38.936128] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_START_DEV completed 00:13:27.723 15:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.723 15:55:38 -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:27.723 15:55:38 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:27.723 15:55:38 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:27.723 15:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.723 15:55:38 -- common/autotest_common.sh@10 -- # set +x 00:13:27.723 15:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.723 15:55:38 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:27.723 { 00:13:27.723 "ublk_device": "/dev/ublkb0", 00:13:27.723 "id": 0, 00:13:27.723 "queue_depth": 512, 00:13:27.723 "num_queues": 4, 00:13:27.723 "bdev_name": "Malloc0" 00:13:27.723 } 00:13:27.723 ]' 00:13:27.723 15:55:38 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:27.723 15:55:38 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:27.723 15:55:38 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:27.723 15:55:39 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:27.723 15:55:39 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:27.724 15:55:39 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:27.724 15:55:39 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:27.724 15:55:39 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:27.724 15:55:39 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:27.724 15:55:39 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:27.724 15:55:39 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:27.724 15:55:39 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:27.724 15:55:39 -- lvol/common.sh@41 -- # local offset=0 00:13:27.724 15:55:39 -- lvol/common.sh@42 -- # local size=134217728 00:13:27.724 15:55:39 -- lvol/common.sh@43 -- # local rw=write 00:13:27.724 15:55:39 -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:27.724 15:55:39 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:27.724 15:55:39 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:27.724 15:55:39 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:27.724 15:55:39 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:27.724 15:55:39 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:27.724 15:55:39 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:27.982 fio: verification read phase will never start because write phase uses all of runtime 00:13:27.982 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:27.982 fio-3.35 00:13:27.982 Starting 1 process 00:13:37.953 00:13:37.953 fio_test: (groupid=0, jobs=1): err= 0: pid=69080: Fri Nov 29 15:55:49 2024 00:13:37.953 write: IOPS=14.6k, BW=57.2MiB/s (60.0MB/s)(572MiB/10001msec); 0 zone resets 00:13:37.953 clat (usec): min=40, max=11104, avg=67.63, stdev=120.50 00:13:37.953 lat (usec): min=41, max=11121, avg=68.00, stdev=120.52 00:13:37.953 clat percentiles (usec): 00:13:37.953 | 1.00th=[ 54], 5.00th=[ 56], 10.00th=[ 58], 20.00th=[ 59], 00:13:37.953 | 
30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 64], 00:13:37.953 | 70.00th=[ 65], 80.00th=[ 67], 90.00th=[ 70], 95.00th=[ 73], 00:13:37.953 | 99.00th=[ 82], 99.50th=[ 91], 99.90th=[ 2671], 99.95th=[ 3294], 00:13:37.953 | 99.99th=[ 3884] 00:13:37.953 bw ( KiB/s): min=27544, max=61168, per=99.89%, avg=58511.32, stdev=7681.84, samples=19 00:13:37.953 iops : min= 6886, max=15292, avg=14627.79, stdev=1920.45, samples=19 00:13:37.953 lat (usec) : 50=0.02%, 100=99.58%, 250=0.19%, 500=0.02%, 750=0.01% 00:13:37.953 lat (usec) : 1000=0.01% 00:13:37.953 lat (msec) : 2=0.05%, 4=0.12%, 10=0.01%, 20=0.01% 00:13:37.953 cpu : usr=2.09%, sys=9.49%, ctx=146468, majf=0, minf=797 00:13:37.953 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:37.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.953 issued rwts: total=0,146454,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.953 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:37.953 00:13:37.953 Run status group 0 (all jobs): 00:13:37.953 WRITE: bw=57.2MiB/s (60.0MB/s), 57.2MiB/s-57.2MiB/s (60.0MB/s-60.0MB/s), io=572MiB (600MB), run=10001-10001msec 00:13:37.953 00:13:37.953 Disk stats (read/write): 00:13:37.953 ublkb0: ios=0/144890, merge=0/0, ticks=0/8822, in_queue=8823, util=99.10% 00:13:37.953 15:55:49 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:13:37.953 15:55:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.953 15:55:49 -- common/autotest_common.sh@10 -- # set +x 00:13:37.953 [2024-11-29 15:55:49.345933] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:38.212 [2024-11-29 15:55:49.390503] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:38.212 [2024-11-29 15:55:49.394652] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:38.212 [2024-11-29 15:55:49.403990] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:38.212 [2024-11-29 15:55:49.404256] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:38.212 [2024-11-29 15:55:49.404266] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:38.212 15:55:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.212 15:55:49 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:13:38.212 15:55:49 -- common/autotest_common.sh@650 -- # local es=0 00:13:38.212 15:55:49 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:38.212 15:55:49 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:38.212 15:55:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:38.212 15:55:49 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:38.212 15:55:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:38.212 15:55:49 -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:13:38.212 15:55:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.212 15:55:49 -- common/autotest_common.sh@10 -- # set +x 00:13:38.212 [2024-11-29 15:55:49.415083] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:38.212 request: 00:13:38.212 { 00:13:38.212 "ublk_id": 0, 00:13:38.212 "method": "ublk_stop_disk", 00:13:38.212 "req_id": 1 00:13:38.212 } 00:13:38.212 Got JSON-RPC error response 00:13:38.212 response: 00:13:38.212 { 00:13:38.212 "code": -19, 
00:13:38.212 "message": "No such device"
00:13:38.212 }
00:13:38.212 15:55:49 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:13:38.212 15:55:49 -- common/autotest_common.sh@653 -- # es=1
00:13:38.212 15:55:49 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:13:38.212 15:55:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:13:38.212 15:55:49 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:13:38.212 15:55:49 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target
00:13:38.212 15:55:49 -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:38.212 15:55:49 -- common/autotest_common.sh@10 -- # set +x
00:13:38.212 [2024-11-29 15:55:49.428036] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown
00:13:38.212 [2024-11-29 15:55:49.435990] ublk.c: 728:_ublk_fini_done: *DEBUG*:
00:13:38.212 [2024-11-29 15:55:49.436016] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:13:38.212 15:55:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:38.212 15:55:49 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0
00:13:38.212 15:55:49 -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:38.212 15:55:49 -- common/autotest_common.sh@10 -- # set +x
00:13:38.471 15:55:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:38.471 15:55:49 -- ublk/ublk.sh@57 -- # check_leftover_devices
00:13:38.471 15:55:49 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs
00:13:38.471 15:55:49 -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:38.471 15:55:49 -- common/autotest_common.sh@10 -- # set +x
00:13:38.471 15:55:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:38.471 15:55:49 -- lvol/common.sh@25 -- # leftover_bdevs='[]'
00:13:38.471 15:55:49 -- lvol/common.sh@26 -- # jq length
00:13:38.471 15:55:49 -- lvol/common.sh@26 -- # '[' 0 == 0 ']'
00:13:38.471 15:55:49 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores
00:13:38.471 15:55:49 -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:38.471 15:55:49 -- common/autotest_common.sh@10 -- # set +x
00:13:38.471 15:55:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:38.471 15:55:49 -- lvol/common.sh@27 -- # leftover_lvs='[]'
00:13:38.471 15:55:49 -- lvol/common.sh@28 -- # jq length
00:13:38.471 ************************************
00:13:38.471 END TEST test_create_ublk
00:13:38.471 ************************************
00:13:38.471 15:55:49 -- lvol/common.sh@28 -- # '[' 0 == 0 ']'
00:13:38.471
00:13:38.471 real 0m11.242s
00:13:38.471 user 0m0.504s
00:13:38.471 sys 0m1.024s
00:13:38.471 15:55:49 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:38.471 15:55:49 -- common/autotest_common.sh@10 -- # set +x
00:13:38.730 15:55:49 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk
00:13:38.730 15:55:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:13:38.730 15:55:49 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:38.730 15:55:49 -- common/autotest_common.sh@10 -- # set +x
00:13:38.730 ************************************
00:13:38.730 START TEST test_create_multi_ublk
00:13:38.730 ************************************
00:13:38.730 15:55:49 -- common/autotest_common.sh@1114 -- # test_create_multi_ublk
00:13:38.730 15:55:49 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target
00:13:38.730 15:55:49 -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:38.730 15:55:49 -- common/autotest_common.sh@10 -- # set +x
00:13:38.730 [2024-11-29 15:55:49.943477] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target
created successfully 00:13:38.730 15:55:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.730 15:55:49 -- ublk/ublk.sh@62 -- # ublk_target= 00:13:38.730 15:55:49 -- ublk/ublk.sh@64 -- # seq 0 3 00:13:38.730 15:55:49 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:38.730 15:55:49 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:13:38.730 15:55:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.730 15:55:49 -- common/autotest_common.sh@10 -- # set +x 00:13:38.990 15:55:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.990 15:55:50 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:13:38.990 15:55:50 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:38.990 15:55:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.990 15:55:50 -- common/autotest_common.sh@10 -- # set +x 00:13:38.990 [2024-11-29 15:55:50.177092] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:38.990 [2024-11-29 15:55:50.177404] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:38.990 [2024-11-29 15:55:50.177416] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:38.990 [2024-11-29 15:55:50.177423] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:38.990 [2024-11-29 15:55:50.189030] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:38.990 [2024-11-29 15:55:50.189051] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:38.990 [2024-11-29 15:55:50.200996] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:38.990 [2024-11-29 15:55:50.201502] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:38.990 [2024-11-29 15:55:50.229005] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:38.990 15:55:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.990 15:55:50 -- ublk/ublk.sh@68 -- # ublk_id=0 00:13:38.990 15:55:50 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:38.990 15:55:50 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:13:38.990 15:55:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.990 15:55:50 -- common/autotest_common.sh@10 -- # set +x 00:13:39.249 15:55:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.249 15:55:50 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:13:39.249 15:55:50 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:13:39.249 15:55:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.249 15:55:50 -- common/autotest_common.sh@10 -- # set +x 00:13:39.249 [2024-11-29 15:55:50.453081] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:13:39.249 [2024-11-29 15:55:50.453380] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:13:39.249 [2024-11-29 15:55:50.453388] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:39.249 [2024-11-29 15:55:50.453393] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:39.249 [2024-11-29 15:55:50.461008] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:39.249 [2024-11-29 15:55:50.461026] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 
00:13:39.249 [2024-11-29 15:55:50.468997] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:39.249 [2024-11-29 15:55:50.469513] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:39.249 [2024-11-29 15:55:50.482000] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:39.249 15:55:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.249 15:55:50 -- ublk/ublk.sh@68 -- # ublk_id=1 00:13:39.249 15:55:50 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:39.249 15:55:50 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:13:39.249 15:55:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.249 15:55:50 -- common/autotest_common.sh@10 -- # set +x 00:13:39.249 15:55:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.249 15:55:50 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:13:39.249 15:55:50 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:13:39.249 15:55:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.249 15:55:50 -- common/autotest_common.sh@10 -- # set +x 00:13:39.249 [2024-11-29 15:55:50.649102] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:13:39.249 [2024-11-29 15:55:50.649397] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:13:39.249 [2024-11-29 15:55:50.649407] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:13:39.249 [2024-11-29 15:55:50.649415] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:13:39.249 [2024-11-29 15:55:50.657007] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:39.249 [2024-11-29 15:55:50.657027] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:39.249 [2024-11-29 15:55:50.664997] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:39.249 [2024-11-29 15:55:50.665517] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:13:39.249 [2024-11-29 15:55:50.674013] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:13:39.508 15:55:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.508 15:55:50 -- ublk/ublk.sh@68 -- # ublk_id=2 00:13:39.508 15:55:50 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:39.508 15:55:50 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:13:39.508 15:55:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.508 15:55:50 -- common/autotest_common.sh@10 -- # set +x 00:13:39.508 15:55:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.508 15:55:50 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:13:39.508 15:55:50 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:13:39.508 15:55:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.508 15:55:50 -- common/autotest_common.sh@10 -- # set +x 00:13:39.508 [2024-11-29 15:55:50.841085] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:13:39.508 [2024-11-29 15:55:50.841379] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:13:39.508 [2024-11-29 15:55:50.841386] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:13:39.508 [2024-11-29 15:55:50.841391] ublk.c: 
433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:13:39.508 [2024-11-29 15:55:50.849016] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:39.508 [2024-11-29 15:55:50.849033] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:39.508 [2024-11-29 15:55:50.856998] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:39.508 [2024-11-29 15:55:50.857500] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:13:39.508 [2024-11-29 15:55:50.866024] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:13:39.508 15:55:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.508 15:55:50 -- ublk/ublk.sh@68 -- # ublk_id=3 00:13:39.508 15:55:50 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:13:39.508 15:55:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.508 15:55:50 -- common/autotest_common.sh@10 -- # set +x 00:13:39.508 15:55:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.508 15:55:50 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:13:39.508 { 00:13:39.508 "ublk_device": "/dev/ublkb0", 00:13:39.508 "id": 0, 00:13:39.508 "queue_depth": 512, 00:13:39.508 "num_queues": 4, 00:13:39.508 "bdev_name": "Malloc0" 00:13:39.508 }, 00:13:39.508 { 00:13:39.508 "ublk_device": "/dev/ublkb1", 00:13:39.508 "id": 1, 00:13:39.508 "queue_depth": 512, 00:13:39.508 "num_queues": 4, 00:13:39.508 "bdev_name": "Malloc1" 00:13:39.508 }, 00:13:39.508 { 00:13:39.508 "ublk_device": "/dev/ublkb2", 00:13:39.508 "id": 2, 00:13:39.508 "queue_depth": 512, 00:13:39.508 "num_queues": 4, 00:13:39.508 "bdev_name": "Malloc2" 00:13:39.508 }, 00:13:39.508 { 00:13:39.508 "ublk_device": "/dev/ublkb3", 00:13:39.508 "id": 3, 00:13:39.508 "queue_depth": 512, 00:13:39.508 "num_queues": 4, 00:13:39.508 "bdev_name": "Malloc3" 00:13:39.508 } 00:13:39.508 ]' 00:13:39.508 15:55:50 -- ublk/ublk.sh@72 -- # seq 0 3 00:13:39.508 15:55:50 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:39.508 15:55:50 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:13:39.508 15:55:50 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:39.508 15:55:50 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:13:39.767 15:55:50 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:13:39.767 15:55:50 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:13:39.767 15:55:50 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:39.767 15:55:50 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:13:39.767 15:55:51 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:39.767 15:55:51 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:13:39.767 15:55:51 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:39.767 15:55:51 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:39.767 15:55:51 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:13:39.767 15:55:51 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:13:39.767 15:55:51 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:13:39.767 15:55:51 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:13:39.767 15:55:51 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:13:39.767 15:55:51 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:39.767 15:55:51 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:13:39.767 15:55:51 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:39.767 15:55:51 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:13:40.034 15:55:51 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 
]] 00:13:40.034 15:55:51 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:40.034 15:55:51 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:13:40.034 15:55:51 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:13:40.034 15:55:51 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:13:40.034 15:55:51 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:13:40.034 15:55:51 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:13:40.034 15:55:51 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:40.034 15:55:51 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:13:40.034 15:55:51 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:40.034 15:55:51 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:13:40.034 15:55:51 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:13:40.034 15:55:51 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:40.034 15:55:51 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:13:40.034 15:55:51 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:13:40.034 15:55:51 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:13:40.034 15:55:51 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:13:40.034 15:55:51 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:13:40.034 15:55:51 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:40.034 15:55:51 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:13:40.318 15:55:51 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:40.318 15:55:51 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:13:40.318 15:55:51 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:13:40.318 15:55:51 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:13:40.318 15:55:51 -- ublk/ublk.sh@85 -- # seq 0 3 00:13:40.318 15:55:51 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:40.318 15:55:51 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:13:40.318 15:55:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.318 15:55:51 -- common/autotest_common.sh@10 -- # set +x 00:13:40.318 [2024-11-29 15:55:51.513062] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:40.318 [2024-11-29 15:55:51.553037] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:40.318 [2024-11-29 15:55:51.553849] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:40.318 [2024-11-29 15:55:51.561011] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:40.318 [2024-11-29 15:55:51.561252] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:40.318 [2024-11-29 15:55:51.561266] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:40.318 15:55:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.318 15:55:51 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:40.318 15:55:51 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:13:40.318 15:55:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.318 15:55:51 -- common/autotest_common.sh@10 -- # set +x 00:13:40.318 [2024-11-29 15:55:51.577055] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:40.318 [2024-11-29 15:55:51.617042] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:40.318 [2024-11-29 15:55:51.617845] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:40.318 [2024-11-29 15:55:51.626035] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:40.318 [2024-11-29 15:55:51.626266] ublk.c: 
947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:13:40.318 [2024-11-29 15:55:51.626278] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:40.318 15:55:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.318 15:55:51 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:40.318 15:55:51 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:13:40.318 15:55:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.318 15:55:51 -- common/autotest_common.sh@10 -- # set +x 00:13:40.318 [2024-11-29 15:55:51.641050] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:13:40.318 [2024-11-29 15:55:51.672509] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:40.318 [2024-11-29 15:55:51.673632] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:13:40.318 [2024-11-29 15:55:51.681023] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:40.318 [2024-11-29 15:55:51.681272] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:13:40.318 [2024-11-29 15:55:51.681287] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:13:40.318 15:55:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.318 15:55:51 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:40.318 15:55:51 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:13:40.318 15:55:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.318 15:55:51 -- common/autotest_common.sh@10 -- # set +x 00:13:40.318 [2024-11-29 15:55:51.697049] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:13:40.318 [2024-11-29 15:55:51.730536] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:40.318 [2024-11-29 15:55:51.731551] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:13:40.601 [2024-11-29 15:55:51.737009] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:40.601 [2024-11-29 15:55:51.737259] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:13:40.601 [2024-11-29 15:55:51.737271] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:13:40.601 15:55:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.601 15:55:51 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:13:40.601 [2024-11-29 15:55:51.921063] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:40.601 [2024-11-29 15:55:51.928989] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:40.601 [2024-11-29 15:55:51.929016] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:40.601 15:55:51 -- ublk/ublk.sh@93 -- # seq 0 3 00:13:40.601 15:55:51 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:40.601 15:55:51 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:40.601 15:55:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.601 15:55:51 -- common/autotest_common.sh@10 -- # set +x 00:13:41.174 15:55:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.174 15:55:52 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:41.174 15:55:52 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:41.174 15:55:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.174 15:55:52 -- common/autotest_common.sh@10 -- # set +x 00:13:41.432 15:55:52 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.432 15:55:52 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:41.432 15:55:52 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:41.432 15:55:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.432 15:55:52 -- common/autotest_common.sh@10 -- # set +x 00:13:41.691 15:55:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.691 15:55:52 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:41.691 15:55:52 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:41.691 15:55:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.691 15:55:52 -- common/autotest_common.sh@10 -- # set +x 00:13:41.691 15:55:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.691 15:55:53 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:41.691 15:55:53 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:41.691 15:55:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.691 15:55:53 -- common/autotest_common.sh@10 -- # set +x 00:13:41.691 15:55:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.691 15:55:53 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:41.691 15:55:53 -- lvol/common.sh@26 -- # jq length 00:13:41.691 15:55:53 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:41.691 15:55:53 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:41.691 15:55:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.691 15:55:53 -- common/autotest_common.sh@10 -- # set +x 00:13:41.691 15:55:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.691 15:55:53 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:41.691 15:55:53 -- lvol/common.sh@28 -- # jq length 00:13:41.691 ************************************ 00:13:41.691 END TEST test_create_multi_ublk 00:13:41.691 ************************************ 00:13:41.691 15:55:53 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:41.691 00:13:41.691 real 0m3.189s 00:13:41.691 user 0m0.789s 00:13:41.691 sys 0m0.129s 00:13:41.691 15:55:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:41.691 15:55:53 -- common/autotest_common.sh@10 -- # set +x 00:13:41.949 15:55:53 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:41.949 15:55:53 -- ublk/ublk.sh@147 -- # cleanup 00:13:41.949 15:55:53 -- ublk/ublk.sh@130 -- # killprocess 69026 00:13:41.949 15:55:53 -- common/autotest_common.sh@936 -- # '[' -z 69026 ']' 00:13:41.949 15:55:53 -- common/autotest_common.sh@940 -- # kill -0 69026 00:13:41.949 15:55:53 -- common/autotest_common.sh@941 -- # uname 00:13:41.949 15:55:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:41.949 15:55:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69026 00:13:41.949 killing process with pid 69026 00:13:41.949 15:55:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:41.949 15:55:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:41.949 15:55:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69026' 00:13:41.949 15:55:53 -- common/autotest_common.sh@955 -- # kill 69026 00:13:41.949 15:55:53 -- common/autotest_common.sh@960 -- # wait 69026 00:13:42.516 [2024-11-29 15:55:53.701658] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:42.516 [2024-11-29 15:55:53.701857] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:43.084 00:13:43.084 real 0m25.609s 00:13:43.084 user 0m37.494s 00:13:43.084 sys 0m8.556s 00:13:43.084 ************************************ 
00:13:43.084 END TEST ublk 00:13:43.084 ************************************ 00:13:43.084 15:55:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:43.084 15:55:54 -- common/autotest_common.sh@10 -- # set +x 00:13:43.084 15:55:54 -- spdk/autotest.sh@247 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:43.084 15:55:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:43.084 15:55:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:43.084 15:55:54 -- common/autotest_common.sh@10 -- # set +x 00:13:43.084 ************************************ 00:13:43.084 START TEST ublk_recovery 00:13:43.084 ************************************ 00:13:43.084 15:55:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:43.084 * Looking for test storage... 00:13:43.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:43.084 15:55:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:43.084 15:55:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:43.084 15:55:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:43.344 15:55:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:43.344 15:55:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:43.344 15:55:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:43.344 15:55:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:43.344 15:55:54 -- scripts/common.sh@335 -- # IFS=.-: 00:13:43.344 15:55:54 -- scripts/common.sh@335 -- # read -ra ver1 00:13:43.344 15:55:54 -- scripts/common.sh@336 -- # IFS=.-: 00:13:43.344 15:55:54 -- scripts/common.sh@336 -- # read -ra ver2 00:13:43.344 15:55:54 -- scripts/common.sh@337 -- # local 'op=<' 00:13:43.344 15:55:54 -- scripts/common.sh@339 -- # ver1_l=2 00:13:43.344 15:55:54 -- scripts/common.sh@340 -- # ver2_l=1 00:13:43.344 15:55:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:43.344 15:55:54 -- scripts/common.sh@343 -- # case "$op" in 00:13:43.344 15:55:54 -- scripts/common.sh@344 -- # : 1 00:13:43.344 15:55:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:43.344 15:55:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:43.344 15:55:54 -- scripts/common.sh@364 -- # decimal 1 00:13:43.344 15:55:54 -- scripts/common.sh@352 -- # local d=1 00:13:43.344 15:55:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:43.344 15:55:54 -- scripts/common.sh@354 -- # echo 1 00:13:43.344 15:55:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:43.344 15:55:54 -- scripts/common.sh@365 -- # decimal 2 00:13:43.344 15:55:54 -- scripts/common.sh@352 -- # local d=2 00:13:43.344 15:55:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:43.344 15:55:54 -- scripts/common.sh@354 -- # echo 2 00:13:43.344 15:55:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:43.344 15:55:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:43.344 15:55:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:43.344 15:55:54 -- scripts/common.sh@367 -- # return 0 00:13:43.344 15:55:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:43.344 15:55:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:43.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.344 --rc genhtml_branch_coverage=1 00:13:43.344 --rc genhtml_function_coverage=1 00:13:43.344 --rc genhtml_legend=1 00:13:43.344 --rc geninfo_all_blocks=1 00:13:43.344 --rc geninfo_unexecuted_blocks=1 00:13:43.344 00:13:43.344 ' 00:13:43.344 15:55:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:43.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.344 --rc genhtml_branch_coverage=1 00:13:43.344 --rc genhtml_function_coverage=1 00:13:43.344 --rc genhtml_legend=1 00:13:43.344 --rc geninfo_all_blocks=1 00:13:43.344 --rc geninfo_unexecuted_blocks=1 00:13:43.344 00:13:43.344 ' 00:13:43.344 15:55:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:43.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.344 --rc genhtml_branch_coverage=1 00:13:43.344 --rc genhtml_function_coverage=1 00:13:43.344 --rc genhtml_legend=1 00:13:43.344 --rc geninfo_all_blocks=1 00:13:43.344 --rc geninfo_unexecuted_blocks=1 00:13:43.344 00:13:43.344 ' 00:13:43.344 15:55:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:43.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.344 --rc genhtml_branch_coverage=1 00:13:43.344 --rc genhtml_function_coverage=1 00:13:43.344 --rc genhtml_legend=1 00:13:43.344 --rc geninfo_all_blocks=1 00:13:43.344 --rc geninfo_unexecuted_blocks=1 00:13:43.344 00:13:43.344 ' 00:13:43.344 15:55:54 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:43.344 15:55:54 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:43.344 15:55:54 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:43.344 15:55:54 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:43.344 15:55:54 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:43.344 15:55:54 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:43.344 15:55:54 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:43.344 15:55:54 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:43.344 15:55:54 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:43.344 15:55:54 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:13:43.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
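modprobe ublk_drv loads the kernel half of ublk; the waitforlisten step traced next simply polls the target's RPC socket until the freshly started spdk_tgt answers. A rough shell equivalent of that wait loop (core mask and retry budget are illustrative):

    build/bin/spdk_tgt -m 0x3 -L ublk &
    spdk_pid=$!
    # rpc_get_methods only succeeds once the app is up and listening on the socket
    for _ in $(seq 1 100); do
        scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done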
00:13:43.344 15:55:54 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=69425 00:13:43.344 15:55:54 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:43.344 15:55:54 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 69425 00:13:43.344 15:55:54 -- common/autotest_common.sh@829 -- # '[' -z 69425 ']' 00:13:43.344 15:55:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.344 15:55:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:43.344 15:55:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.344 15:55:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:43.344 15:55:54 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:43.344 15:55:54 -- common/autotest_common.sh@10 -- # set +x 00:13:43.344 [2024-11-29 15:55:54.629397] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:43.344 [2024-11-29 15:55:54.629534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69425 ] 00:13:43.603 [2024-11-29 15:55:54.781129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:43.603 [2024-11-29 15:55:54.921045] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:43.603 [2024-11-29 15:55:54.921460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.603 [2024-11-29 15:55:54.921595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.177 15:55:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:44.177 15:55:55 -- common/autotest_common.sh@862 -- # return 0 00:13:44.177 15:55:55 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:13:44.177 15:55:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.177 15:55:55 -- common/autotest_common.sh@10 -- # set +x 00:13:44.177 [2024-11-29 15:55:55.444444] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:44.177 15:55:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.177 15:55:55 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:44.177 15:55:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.177 15:55:55 -- common/autotest_common.sh@10 -- # set +x 00:13:44.177 malloc0 00:13:44.177 15:55:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.177 15:55:55 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:13:44.177 15:55:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.177 15:55:55 -- common/autotest_common.sh@10 -- # set +x 00:13:44.177 [2024-11-29 15:55:55.531107] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:13:44.177 [2024-11-29 15:55:55.531191] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:13:44.177 [2024-11-29 15:55:55.531197] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:44.177 [2024-11-29 15:55:55.531204] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:44.177 [2024-11-29 15:55:55.540068] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:44.177 [2024-11-29 
15:55:55.540087] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:44.177 [2024-11-29 15:55:55.547001] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:44.177 [2024-11-29 15:55:55.547115] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:44.177 [2024-11-29 15:55:55.569995] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:44.177 1 00:13:44.177 15:55:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.177 15:55:55 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:13:45.551 15:55:56 -- ublk/ublk_recovery.sh@31 -- # fio_proc=69460 00:13:45.551 15:55:56 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:13:45.551 15:55:56 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:13:45.551 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:45.551 fio-3.35 00:13:45.551 Starting 1 process 00:13:50.820 15:56:01 -- ublk/ublk_recovery.sh@36 -- # kill -9 69425 00:13:50.820 15:56:01 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:13:56.107 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 69425 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:13:56.107 15:56:06 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=69575 00:13:56.107 15:56:06 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:56.107 15:56:06 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:56.107 15:56:06 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 69575 00:13:56.107 15:56:06 -- common/autotest_common.sh@829 -- # '[' -z 69575 ']' 00:13:56.107 15:56:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.107 15:56:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:56.107 15:56:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.107 15:56:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:56.107 15:56:06 -- common/autotest_common.sh@10 -- # set +x 00:13:56.107 [2024-11-29 15:56:06.675591] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
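The crash injection above is deliberately blunt: fio keeps /dev/ublkb1 under random I/O, the target is SIGKILLed mid-run, and a replacement spdk_tgt (pid 69575 here) is brought up to reattach the still-live kernel device rather than recreate it. A condensed sketch of that sequence (sleep intervals illustrative; ublk_recover_disk is the RPC that drives the UBLK_CMD_START_USER_RECOVERY / END_USER_RECOVERY exchange logged below):

    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --ioengine=libaio \
        --iodepth=128 --rw=randrw --direct=1 --time_based --runtime=60 &
    sleep 5
    kill -9 "$spdk_pid"                    # hard-kill the target while I/O is in flight
    build/bin/spdk_tgt -m 0x3 -L ublk &    # replacement target; wait for its RPC socket
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096   # must match the pre-crash bdev
    scripts/rpc.py ublk_recover_disk malloc0 1             # reattach ublk id 1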
00:13:56.107 [2024-11-29 15:56:06.675739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69575 ] 00:13:56.107 [2024-11-29 15:56:06.830907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:56.107 [2024-11-29 15:56:07.102598] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:56.107 [2024-11-29 15:56:07.103147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.107 [2024-11-29 15:56:07.105194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.051 15:56:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:57.051 15:56:08 -- common/autotest_common.sh@862 -- # return 0 00:13:57.051 15:56:08 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:13:57.051 15:56:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.051 15:56:08 -- common/autotest_common.sh@10 -- # set +x 00:13:57.051 [2024-11-29 15:56:08.179831] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:57.051 15:56:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.051 15:56:08 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:57.051 15:56:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.051 15:56:08 -- common/autotest_common.sh@10 -- # set +x 00:13:57.051 malloc0 00:13:57.051 15:56:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.051 15:56:08 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:13:57.051 15:56:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.051 15:56:08 -- common/autotest_common.sh@10 -- # set +x 00:13:57.051 [2024-11-29 15:56:08.282116] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:13:57.051 [2024-11-29 15:56:08.282155] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:57.051 [2024-11-29 15:56:08.282164] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:57.051 [2024-11-29 15:56:08.290051] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:57.051 [2024-11-29 15:56:08.290071] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:13:57.051 [2024-11-29 15:56:08.290147] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:13:57.051 1 00:13:57.051 15:56:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.051 15:56:08 -- ublk/ublk_recovery.sh@52 -- # wait 69460 00:14:23.599 [2024-11-29 15:56:31.896998] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:23.599 [2024-11-29 15:56:31.904123] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:23.599 [2024-11-29 15:56:31.912180] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:23.599 [2024-11-29 15:56:31.912204] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:14:45.521 00:14:45.521 fio_test: (groupid=0, jobs=1): err= 0: pid=69463: Fri Nov 29 15:56:56 2024 00:14:45.521 read: IOPS=14.9k, BW=58.2MiB/s (61.0MB/s)(3491MiB/60002msec) 00:14:45.521 slat (nsec): min=1164, max=122147, 
avg=4852.04, stdev=1262.39 00:14:45.521 clat (usec): min=1088, max=30336k, avg=4016.95, stdev=242237.09 00:14:45.521 lat (usec): min=1097, max=30336k, avg=4021.80, stdev=242237.09 00:14:45.521 clat percentiles (usec): 00:14:45.521 | 1.00th=[ 1729], 5.00th=[ 1860], 10.00th=[ 1893], 20.00th=[ 1909], 00:14:45.521 | 30.00th=[ 1926], 40.00th=[ 1942], 50.00th=[ 1958], 60.00th=[ 1975], 00:14:45.521 | 70.00th=[ 1991], 80.00th=[ 2008], 90.00th=[ 2040], 95.00th=[ 2900], 00:14:45.521 | 99.00th=[ 5145], 99.50th=[ 5538], 99.90th=[ 7177], 99.95th=[ 8291], 00:14:45.521 | 99.99th=[13304] 00:14:45.521 bw ( KiB/s): min=44704, max=125384, per=100.00%, avg=119276.75, stdev=14440.86, samples=59 00:14:45.521 iops : min=11176, max=31346, avg=29819.19, stdev=3610.22, samples=59 00:14:45.521 write: IOPS=14.9k, BW=58.1MiB/s (60.9MB/s)(3486MiB/60002msec); 0 zone resets 00:14:45.521 slat (nsec): min=1147, max=166878, avg=4874.88, stdev=1247.15 00:14:45.521 clat (usec): min=993, max=30336k, avg=4572.38, stdev=270547.23 00:14:45.521 lat (usec): min=998, max=30336k, avg=4577.26, stdev=270547.23 00:14:45.521 clat percentiles (usec): 00:14:45.521 | 1.00th=[ 1778], 5.00th=[ 1942], 10.00th=[ 1975], 20.00th=[ 2008], 00:14:45.521 | 30.00th=[ 2024], 40.00th=[ 2040], 50.00th=[ 2040], 60.00th=[ 2057], 00:14:45.521 | 70.00th=[ 2073], 80.00th=[ 2089], 90.00th=[ 2147], 95.00th=[ 2868], 00:14:45.521 | 99.00th=[ 5145], 99.50th=[ 5604], 99.90th=[ 7242], 99.95th=[ 8291], 00:14:45.522 | 99.99th=[13566] 00:14:45.522 bw ( KiB/s): min=44056, max=125392, per=100.00%, avg=119089.90, stdev=14601.44, samples=59 00:14:45.522 iops : min=11014, max=31348, avg=29772.47, stdev=3650.36, samples=59 00:14:45.522 lat (usec) : 1000=0.01% 00:14:45.522 lat (msec) : 2=48.73%, 4=48.46%, 10=2.77%, 20=0.03%, >=2000=0.01% 00:14:45.522 cpu : usr=3.37%, sys=14.83%, ctx=59140, majf=0, minf=13 00:14:45.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:14:45.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:45.522 issued rwts: total=893725,892432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:45.522 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:45.522 00:14:45.522 Run status group 0 (all jobs): 00:14:45.522 READ: bw=58.2MiB/s (61.0MB/s), 58.2MiB/s-58.2MiB/s (61.0MB/s-61.0MB/s), io=3491MiB (3661MB), run=60002-60002msec 00:14:45.522 WRITE: bw=58.1MiB/s (60.9MB/s), 58.1MiB/s-58.1MiB/s (60.9MB/s-60.9MB/s), io=3486MiB (3655MB), run=60002-60002msec 00:14:45.522 00:14:45.522 Disk stats (read/write): 00:14:45.522 ublkb1: ios=890434/889128, merge=0/0, ticks=3538427/3956879, in_queue=7495306, util=99.90% 00:14:45.522 15:56:56 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:14:45.522 15:56:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.522 15:56:56 -- common/autotest_common.sh@10 -- # set +x 00:14:45.522 [2024-11-29 15:56:56.829395] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:45.522 [2024-11-29 15:56:56.871010] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:45.522 [2024-11-29 15:56:56.871168] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:45.522 [2024-11-29 15:56:56.879007] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:45.522 [2024-11-29 15:56:56.879157] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from 
tailq 00:14:45.522 [2024-11-29 15:56:56.879181] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:45.522 15:56:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.522 15:56:56 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:14:45.522 15:56:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.522 15:56:56 -- common/autotest_common.sh@10 -- # set +x 00:14:45.522 [2024-11-29 15:56:56.893055] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:45.522 [2024-11-29 15:56:56.897206] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:45.522 [2024-11-29 15:56:56.897234] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:45.522 15:56:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.522 15:56:56 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:14:45.522 15:56:56 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:14:45.522 15:56:56 -- ublk/ublk_recovery.sh@14 -- # killprocess 69575 00:14:45.522 15:56:56 -- common/autotest_common.sh@936 -- # '[' -z 69575 ']' 00:14:45.522 15:56:56 -- common/autotest_common.sh@940 -- # kill -0 69575 00:14:45.522 15:56:56 -- common/autotest_common.sh@941 -- # uname 00:14:45.522 15:56:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:45.522 15:56:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69575 00:14:45.522 killing process with pid 69575 00:14:45.522 15:56:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:45.522 15:56:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:45.522 15:56:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69575' 00:14:45.522 15:56:56 -- common/autotest_common.sh@955 -- # kill 69575 00:14:45.522 15:56:56 -- common/autotest_common.sh@960 -- # wait 69575 00:14:46.897 [2024-11-29 15:56:57.945594] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:46.897 [2024-11-29 15:56:57.945641] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:47.466 00:14:47.466 real 1m4.241s 00:14:47.466 user 1m47.899s 00:14:47.466 sys 0m20.659s 00:14:47.466 15:56:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:47.466 15:56:58 -- common/autotest_common.sh@10 -- # set +x 00:14:47.466 ************************************ 00:14:47.466 END TEST ublk_recovery 00:14:47.466 ************************************ 00:14:47.466 15:56:58 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:14:47.466 15:56:58 -- spdk/autotest.sh@255 -- # timing_exit lib 00:14:47.466 15:56:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:47.466 15:56:58 -- common/autotest_common.sh@10 -- # set +x 00:14:47.466 15:56:58 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:14:47.466 15:56:58 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:14:47.466 15:56:58 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:14:47.466 15:56:58 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:14:47.466 15:56:58 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:14:47.466 15:56:58 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:14:47.466 15:56:58 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:14:47.466 15:56:58 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:14:47.466 15:56:58 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:14:47.466 15:56:58 -- spdk/autotest.sh@329 -- # '[' 1 -eq 1 ']' 00:14:47.466 15:56:58 -- spdk/autotest.sh@330 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:47.466 15:56:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:47.466 15:56:58 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:14:47.466 15:56:58 -- common/autotest_common.sh@10 -- # set +x 00:14:47.466 ************************************ 00:14:47.467 START TEST ftl 00:14:47.467 ************************************ 00:14:47.467 15:56:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:47.467 * Looking for test storage... 00:14:47.467 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:47.467 15:56:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:47.467 15:56:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:47.467 15:56:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:47.467 15:56:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:47.467 15:56:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:47.467 15:56:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:47.467 15:56:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:47.467 15:56:58 -- scripts/common.sh@335 -- # IFS=.-: 00:14:47.467 15:56:58 -- scripts/common.sh@335 -- # read -ra ver1 00:14:47.467 15:56:58 -- scripts/common.sh@336 -- # IFS=.-: 00:14:47.467 15:56:58 -- scripts/common.sh@336 -- # read -ra ver2 00:14:47.467 15:56:58 -- scripts/common.sh@337 -- # local 'op=<' 00:14:47.467 15:56:58 -- scripts/common.sh@339 -- # ver1_l=2 00:14:47.467 15:56:58 -- scripts/common.sh@340 -- # ver2_l=1 00:14:47.467 15:56:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:47.467 15:56:58 -- scripts/common.sh@343 -- # case "$op" in 00:14:47.467 15:56:58 -- scripts/common.sh@344 -- # : 1 00:14:47.467 15:56:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:47.467 15:56:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:47.467 15:56:58 -- scripts/common.sh@364 -- # decimal 1 00:14:47.467 15:56:58 -- scripts/common.sh@352 -- # local d=1 00:14:47.467 15:56:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:47.467 15:56:58 -- scripts/common.sh@354 -- # echo 1 00:14:47.467 15:56:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:47.467 15:56:58 -- scripts/common.sh@365 -- # decimal 2 00:14:47.467 15:56:58 -- scripts/common.sh@352 -- # local d=2 00:14:47.467 15:56:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:47.467 15:56:58 -- scripts/common.sh@354 -- # echo 2 00:14:47.467 15:56:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:47.467 15:56:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:47.467 15:56:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:47.467 15:56:58 -- scripts/common.sh@367 -- # return 0 00:14:47.467 15:56:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:47.467 15:56:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:47.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.467 --rc genhtml_branch_coverage=1 00:14:47.467 --rc genhtml_function_coverage=1 00:14:47.467 --rc genhtml_legend=1 00:14:47.467 --rc geninfo_all_blocks=1 00:14:47.467 --rc geninfo_unexecuted_blocks=1 00:14:47.467 00:14:47.467 ' 00:14:47.467 15:56:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:47.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.467 --rc genhtml_branch_coverage=1 00:14:47.467 --rc genhtml_function_coverage=1 00:14:47.467 --rc genhtml_legend=1 00:14:47.467 --rc geninfo_all_blocks=1 00:14:47.467 --rc geninfo_unexecuted_blocks=1 00:14:47.467 00:14:47.467 ' 00:14:47.467 
15:56:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:47.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.467 --rc genhtml_branch_coverage=1 00:14:47.467 --rc genhtml_function_coverage=1 00:14:47.467 --rc genhtml_legend=1 00:14:47.467 --rc geninfo_all_blocks=1 00:14:47.467 --rc geninfo_unexecuted_blocks=1 00:14:47.467 00:14:47.467 ' 00:14:47.467 15:56:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:47.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.467 --rc genhtml_branch_coverage=1 00:14:47.467 --rc genhtml_function_coverage=1 00:14:47.467 --rc genhtml_legend=1 00:14:47.467 --rc geninfo_all_blocks=1 00:14:47.467 --rc geninfo_unexecuted_blocks=1 00:14:47.467 00:14:47.467 ' 00:14:47.467 15:56:58 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:47.467 15:56:58 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:47.467 15:56:58 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:47.727 15:56:58 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:47.727 15:56:58 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:47.727 15:56:58 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:47.727 15:56:58 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:47.727 15:56:58 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:47.727 15:56:58 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:47.727 15:56:58 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:47.727 15:56:58 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:47.727 15:56:58 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:47.727 15:56:58 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:47.727 15:56:58 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:47.727 15:56:58 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:47.727 15:56:58 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:47.727 15:56:58 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:47.727 15:56:58 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:47.727 15:56:58 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:47.727 15:56:58 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:47.727 15:56:58 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:47.727 15:56:58 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:47.727 15:56:58 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:47.727 15:56:58 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:47.727 15:56:58 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:47.727 15:56:58 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:47.727 15:56:58 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:47.727 15:56:58 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:47.727 15:56:58 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:47.727 15:56:58 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:47.727 15:56:58 
-- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:14:47.727 15:56:58 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:14:47.727 15:56:58 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:14:47.727 15:56:58 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:14:47.727 15:56:58 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:47.987 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:47.987 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:47.987 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:47.987 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:47.987 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:47.987 15:56:59 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=70386 00:14:47.987 15:56:59 -- ftl/ftl.sh@38 -- # waitforlisten 70386 00:14:47.987 15:56:59 -- common/autotest_common.sh@829 -- # '[' -z 70386 ']' 00:14:47.987 15:56:59 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:14:47.987 15:56:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.987 15:56:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:47.987 15:56:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.987 15:56:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:47.987 15:56:59 -- common/autotest_common.sh@10 -- # set +x 00:14:48.247 [2024-11-29 15:56:59.454388] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:48.247 [2024-11-29 15:56:59.454490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70386 ] 00:14:48.247 [2024-11-29 15:56:59.596460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.505 [2024-11-29 15:56:59.737902] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:48.505 [2024-11-29 15:56:59.738065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.071 15:57:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:49.071 15:57:00 -- common/autotest_common.sh@862 -- # return 0 00:14:49.071 15:57:00 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:14:49.071 15:57:00 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:49.638 15:57:00 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:14:49.639 15:57:00 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:50.204 15:57:01 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:14:50.204 15:57:01 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:50.204 15:57:01 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:50.204 15:57:01 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:06.0 00:14:50.204 15:57:01 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:14:50.204 15:57:01 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:06.0 00:14:50.204 15:57:01 -- 
ftl/ftl.sh@50 -- # break 00:14:50.204 15:57:01 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:06.0 ']' 00:14:50.204 15:57:01 -- ftl/ftl.sh@59 -- # base_size=1310720 00:14:50.204 15:57:01 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:50.204 15:57:01 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:50.462 15:57:01 -- ftl/ftl.sh@60 -- # base_disks=0000:00:07.0 00:14:50.462 15:57:01 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:14:50.462 15:57:01 -- ftl/ftl.sh@62 -- # device=0000:00:07.0 00:14:50.462 15:57:01 -- ftl/ftl.sh@63 -- # break 00:14:50.462 15:57:01 -- ftl/ftl.sh@66 -- # killprocess 70386 00:14:50.462 15:57:01 -- common/autotest_common.sh@936 -- # '[' -z 70386 ']' 00:14:50.462 15:57:01 -- common/autotest_common.sh@940 -- # kill -0 70386 00:14:50.462 15:57:01 -- common/autotest_common.sh@941 -- # uname 00:14:50.462 15:57:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:50.462 15:57:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70386 00:14:50.462 15:57:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:50.462 15:57:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:50.462 killing process with pid 70386 00:14:50.462 15:57:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70386' 00:14:50.462 15:57:01 -- common/autotest_common.sh@955 -- # kill 70386 00:14:50.462 15:57:01 -- common/autotest_common.sh@960 -- # wait 70386 00:14:51.836 15:57:02 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:07.0 ']' 00:14:51.836 15:57:02 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:14:51.836 15:57:02 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:14:51.836 15:57:02 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:51.836 15:57:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:51.836 15:57:02 -- common/autotest_common.sh@10 -- # set +x 00:14:51.836 ************************************ 00:14:51.836 START TEST ftl_fio_basic 00:14:51.836 ************************************ 00:14:51.836 15:57:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:14:51.836 * Looking for test storage... 
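Before ftl.sh handed 0000:00:07.0 and 0000:00:06.0 down to ftl_fio_basic, it picked them with the two jq filters visible in the trace above: the write-buffer cache must be a non-zoned NVMe bdev with 64-byte metadata and at least 1310720 blocks, and the base device is any other non-zoned bdev of the same minimum size whose PCI address differs from the chosen cache. A minimal standalone sketch of that selection, with the filter strings taken verbatim from the trace (the literal cache address is generalized to $nv_cache, and the first match wins, mirroring the for/break pattern above):

    #!/usr/bin/env bash
    # Sketch: reproduce the cache/base device selection shown in the trace.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Cache candidates: non-zoned, 64B metadata, >= 1310720 blocks (here: 0000:00:06.0).
    for disk in $($rpc_py bdev_get_bdevs | jq -r \
        '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'); do
        nv_cache=$disk
        break
    done

    # Base candidates: any other large non-zoned disk (here: 0000:00:07.0).
    for disk in $($rpc_py bdev_get_bdevs | jq -r ".[] |
        select(.driver_specific.nvme[0].pci_address!=\"$nv_cache\" and .zoned == false and .num_blocks >= 1310720)
        .driver_specific.nvme[].pci_address"); do
        device=$disk
        break
    done

    echo "nv_cache=$nv_cache device=$device"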
00:14:51.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:51.836 15:57:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:51.836 15:57:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:51.836 15:57:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:51.836 15:57:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:51.836 15:57:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:51.836 15:57:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:51.836 15:57:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:51.836 15:57:03 -- scripts/common.sh@335 -- # IFS=.-: 00:14:51.836 15:57:03 -- scripts/common.sh@335 -- # read -ra ver1 00:14:51.836 15:57:03 -- scripts/common.sh@336 -- # IFS=.-: 00:14:51.836 15:57:03 -- scripts/common.sh@336 -- # read -ra ver2 00:14:51.836 15:57:03 -- scripts/common.sh@337 -- # local 'op=<' 00:14:51.836 15:57:03 -- scripts/common.sh@339 -- # ver1_l=2 00:14:51.836 15:57:03 -- scripts/common.sh@340 -- # ver2_l=1 00:14:51.836 15:57:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:51.836 15:57:03 -- scripts/common.sh@343 -- # case "$op" in 00:14:51.836 15:57:03 -- scripts/common.sh@344 -- # : 1 00:14:51.836 15:57:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:51.836 15:57:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:51.836 15:57:03 -- scripts/common.sh@364 -- # decimal 1 00:14:51.836 15:57:03 -- scripts/common.sh@352 -- # local d=1 00:14:51.836 15:57:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:51.836 15:57:03 -- scripts/common.sh@354 -- # echo 1 00:14:51.836 15:57:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:51.836 15:57:03 -- scripts/common.sh@365 -- # decimal 2 00:14:51.836 15:57:03 -- scripts/common.sh@352 -- # local d=2 00:14:51.836 15:57:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:51.836 15:57:03 -- scripts/common.sh@354 -- # echo 2 00:14:51.836 15:57:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:51.836 15:57:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:51.836 15:57:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:51.836 15:57:03 -- scripts/common.sh@367 -- # return 0 00:14:51.836 15:57:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:51.836 15:57:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:51.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.836 --rc genhtml_branch_coverage=1 00:14:51.836 --rc genhtml_function_coverage=1 00:14:51.836 --rc genhtml_legend=1 00:14:51.836 --rc geninfo_all_blocks=1 00:14:51.836 --rc geninfo_unexecuted_blocks=1 00:14:51.836 00:14:51.836 ' 00:14:51.837 15:57:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:51.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.837 --rc genhtml_branch_coverage=1 00:14:51.837 --rc genhtml_function_coverage=1 00:14:51.837 --rc genhtml_legend=1 00:14:51.837 --rc geninfo_all_blocks=1 00:14:51.837 --rc geninfo_unexecuted_blocks=1 00:14:51.837 00:14:51.837 ' 00:14:51.837 15:57:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:51.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.837 --rc genhtml_branch_coverage=1 00:14:51.837 --rc genhtml_function_coverage=1 00:14:51.837 --rc genhtml_legend=1 00:14:51.837 --rc geninfo_all_blocks=1 00:14:51.837 --rc geninfo_unexecuted_blocks=1 00:14:51.837 00:14:51.837 ' 00:14:51.837 15:57:03 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:51.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.837 --rc genhtml_branch_coverage=1 00:14:51.837 --rc genhtml_function_coverage=1 00:14:51.837 --rc genhtml_legend=1 00:14:51.837 --rc geninfo_all_blocks=1 00:14:51.837 --rc geninfo_unexecuted_blocks=1 00:14:51.837 00:14:51.837 ' 00:14:51.837 15:57:03 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:51.837 15:57:03 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:14:51.837 15:57:03 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:51.837 15:57:03 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:51.837 15:57:03 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:51.837 15:57:03 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:51.837 15:57:03 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:51.837 15:57:03 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:51.837 15:57:03 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:51.837 15:57:03 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:51.837 15:57:03 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:51.837 15:57:03 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:51.837 15:57:03 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:51.837 15:57:03 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:51.837 15:57:03 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:51.837 15:57:03 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:51.837 15:57:03 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:51.837 15:57:03 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:51.837 15:57:03 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:51.837 15:57:03 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:51.837 15:57:03 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:51.837 15:57:03 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:51.837 15:57:03 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:51.837 15:57:03 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:51.837 15:57:03 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:51.837 15:57:03 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:51.837 15:57:03 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:51.837 15:57:03 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:51.837 15:57:03 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:51.837 15:57:03 -- ftl/fio.sh@11 -- # declare -A suite 00:14:51.837 15:57:03 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:51.837 15:57:03 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:14:51.837 15:57:03 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:14:51.837 15:57:03 -- ftl/fio.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:51.837 15:57:03 -- ftl/fio.sh@23 -- # device=0000:00:07.0 00:14:51.837 15:57:03 -- ftl/fio.sh@24 -- # cache_device=0000:00:06.0 00:14:51.837 15:57:03 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:51.837 15:57:03 -- ftl/fio.sh@26 -- # uuid= 00:14:51.837 15:57:03 -- ftl/fio.sh@27 -- # timeout=240 00:14:51.837 15:57:03 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:14:51.837 15:57:03 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:14:51.837 15:57:03 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:14:51.837 15:57:03 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:14:51.837 15:57:03 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:51.837 15:57:03 -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:51.837 15:57:03 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:14:51.837 15:57:03 -- ftl/fio.sh@45 -- # svcpid=70511 00:14:51.837 15:57:03 -- ftl/fio.sh@46 -- # waitforlisten 70511 00:14:51.837 15:57:03 -- common/autotest_common.sh@829 -- # '[' -z 70511 ']' 00:14:51.837 15:57:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.837 15:57:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:51.837 15:57:03 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:14:51.837 15:57:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.837 15:57:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:51.837 15:57:03 -- common/autotest_common.sh@10 -- # set +x 00:14:51.837 [2024-11-29 15:57:03.234772] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
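The declare -A suite table that fio.sh set up above is the whole test-selection mechanism: the suite name given on the command line ("basic" in this run) indexes the associative array, and the resulting word list becomes the fio jobs to execute, which is why tests ends up as 'randw-verify randw-verify-j2 randw-verify-depth128'. A minimal sketch of that lookup, with the array values copied verbatim from the trace (the positional-argument handling is an assumption for illustration, not fio.sh's exact code):

    #!/usr/bin/env bash
    # Sketch: suite-name -> fio job list lookup (values verbatim from the trace above).
    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
    suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'

    mode=${3:-basic}       # assumed: third argument selects the suite, as in "fio.sh <dev> <cache> basic"
    tests=${suite[$mode]}
    for t in $tests; do
        echo "would run fio job: $t"
    done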
00:14:51.837 [2024-11-29 15:57:03.235047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70511 ] 00:14:52.099 [2024-11-29 15:57:03.372604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:52.360 [2024-11-29 15:57:03.590605] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:52.360 [2024-11-29 15:57:03.591335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.360 [2024-11-29 15:57:03.591761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.360 [2024-11-29 15:57:03.591866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.393 15:57:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:53.393 15:57:04 -- common/autotest_common.sh@862 -- # return 0 00:14:53.393 15:57:04 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:14:53.393 15:57:04 -- ftl/common.sh@54 -- # local name=nvme0 00:14:53.393 15:57:04 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:14:53.393 15:57:04 -- ftl/common.sh@56 -- # local size=103424 00:14:53.393 15:57:04 -- ftl/common.sh@59 -- # local base_bdev 00:14:53.393 15:57:04 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:14:53.655 15:57:05 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:14:53.655 15:57:05 -- ftl/common.sh@62 -- # local base_size 00:14:53.655 15:57:05 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:14:53.655 15:57:05 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:14:53.655 15:57:05 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:53.655 15:57:05 -- common/autotest_common.sh@1369 -- # local bs 00:14:53.655 15:57:05 -- common/autotest_common.sh@1370 -- # local nb 00:14:53.655 15:57:05 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:14:53.916 15:57:05 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:53.916 { 00:14:53.916 "name": "nvme0n1", 00:14:53.916 "aliases": [ 00:14:53.916 "39161e42-a030-4147-8338-9a567e11d3d0" 00:14:53.916 ], 00:14:53.916 "product_name": "NVMe disk", 00:14:53.916 "block_size": 4096, 00:14:53.916 "num_blocks": 1310720, 00:14:53.916 "uuid": "39161e42-a030-4147-8338-9a567e11d3d0", 00:14:53.916 "assigned_rate_limits": { 00:14:53.916 "rw_ios_per_sec": 0, 00:14:53.916 "rw_mbytes_per_sec": 0, 00:14:53.916 "r_mbytes_per_sec": 0, 00:14:53.916 "w_mbytes_per_sec": 0 00:14:53.916 }, 00:14:53.916 "claimed": false, 00:14:53.916 "zoned": false, 00:14:53.916 "supported_io_types": { 00:14:53.916 "read": true, 00:14:53.916 "write": true, 00:14:53.916 "unmap": true, 00:14:53.916 "write_zeroes": true, 00:14:53.916 "flush": true, 00:14:53.916 "reset": true, 00:14:53.916 "compare": true, 00:14:53.916 "compare_and_write": false, 00:14:53.916 "abort": true, 00:14:53.916 "nvme_admin": true, 00:14:53.916 "nvme_io": true 00:14:53.916 }, 00:14:53.916 "driver_specific": { 00:14:53.916 "nvme": [ 00:14:53.916 { 00:14:53.916 "pci_address": "0000:00:07.0", 00:14:53.916 "trid": { 00:14:53.916 "trtype": "PCIe", 00:14:53.916 "traddr": "0000:00:07.0" 00:14:53.916 }, 00:14:53.916 "ctrlr_data": { 00:14:53.916 "cntlid": 0, 00:14:53.916 "vendor_id": "0x1b36", 00:14:53.916 "model_number": "QEMU NVMe Ctrl", 00:14:53.916 "serial_number": 
"12341", 00:14:53.916 "firmware_revision": "8.0.0", 00:14:53.916 "subnqn": "nqn.2019-08.org.qemu:12341", 00:14:53.916 "oacs": { 00:14:53.916 "security": 0, 00:14:53.916 "format": 1, 00:14:53.916 "firmware": 0, 00:14:53.916 "ns_manage": 1 00:14:53.916 }, 00:14:53.916 "multi_ctrlr": false, 00:14:53.916 "ana_reporting": false 00:14:53.916 }, 00:14:53.916 "vs": { 00:14:53.916 "nvme_version": "1.4" 00:14:53.916 }, 00:14:53.916 "ns_data": { 00:14:53.916 "id": 1, 00:14:53.916 "can_share": false 00:14:53.916 } 00:14:53.916 } 00:14:53.916 ], 00:14:53.916 "mp_policy": "active_passive" 00:14:53.916 } 00:14:53.916 } 00:14:53.916 ]' 00:14:53.916 15:57:05 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:53.916 15:57:05 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:53.916 15:57:05 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:53.916 15:57:05 -- common/autotest_common.sh@1373 -- # nb=1310720 00:14:53.916 15:57:05 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:14:53.916 15:57:05 -- common/autotest_common.sh@1377 -- # echo 5120 00:14:53.916 15:57:05 -- ftl/common.sh@63 -- # base_size=5120 00:14:53.916 15:57:05 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:14:53.916 15:57:05 -- ftl/common.sh@67 -- # clear_lvols 00:14:53.916 15:57:05 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:14:53.916 15:57:05 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:54.175 15:57:05 -- ftl/common.sh@28 -- # stores= 00:14:54.175 15:57:05 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:14:54.434 15:57:05 -- ftl/common.sh@68 -- # lvs=c0492132-f354-4c02-a2ea-2ee54431abfc 00:14:54.434 15:57:05 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c0492132-f354-4c02-a2ea-2ee54431abfc 00:14:54.693 15:57:05 -- ftl/fio.sh@48 -- # split_bdev=5e9d22a6-cb36-450c-ba54-a8fcae7f5344 00:14:54.693 15:57:05 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:06.0 5e9d22a6-cb36-450c-ba54-a8fcae7f5344 00:14:54.693 15:57:05 -- ftl/common.sh@35 -- # local name=nvc0 00:14:54.693 15:57:05 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:14:54.693 15:57:05 -- ftl/common.sh@37 -- # local base_bdev=5e9d22a6-cb36-450c-ba54-a8fcae7f5344 00:14:54.693 15:57:05 -- ftl/common.sh@38 -- # local cache_size= 00:14:54.693 15:57:05 -- ftl/common.sh@41 -- # get_bdev_size 5e9d22a6-cb36-450c-ba54-a8fcae7f5344 00:14:54.693 15:57:05 -- common/autotest_common.sh@1367 -- # local bdev_name=5e9d22a6-cb36-450c-ba54-a8fcae7f5344 00:14:54.693 15:57:05 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:54.693 15:57:05 -- common/autotest_common.sh@1369 -- # local bs 00:14:54.693 15:57:05 -- common/autotest_common.sh@1370 -- # local nb 00:14:54.693 15:57:05 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e9d22a6-cb36-450c-ba54-a8fcae7f5344 00:14:54.693 15:57:06 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:54.693 { 00:14:54.693 "name": "5e9d22a6-cb36-450c-ba54-a8fcae7f5344", 00:14:54.693 "aliases": [ 00:14:54.693 "lvs/nvme0n1p0" 00:14:54.693 ], 00:14:54.693 "product_name": "Logical Volume", 00:14:54.693 "block_size": 4096, 00:14:54.693 "num_blocks": 26476544, 00:14:54.693 "uuid": "5e9d22a6-cb36-450c-ba54-a8fcae7f5344", 00:14:54.693 "assigned_rate_limits": { 00:14:54.693 "rw_ios_per_sec": 0, 00:14:54.693 "rw_mbytes_per_sec": 0, 00:14:54.693 "r_mbytes_per_sec": 0, 00:14:54.693 
"w_mbytes_per_sec": 0 00:14:54.693 }, 00:14:54.693 "claimed": false, 00:14:54.693 "zoned": false, 00:14:54.693 "supported_io_types": { 00:14:54.693 "read": true, 00:14:54.693 "write": true, 00:14:54.693 "unmap": true, 00:14:54.693 "write_zeroes": true, 00:14:54.693 "flush": false, 00:14:54.693 "reset": true, 00:14:54.693 "compare": false, 00:14:54.693 "compare_and_write": false, 00:14:54.693 "abort": false, 00:14:54.693 "nvme_admin": false, 00:14:54.693 "nvme_io": false 00:14:54.693 }, 00:14:54.693 "driver_specific": { 00:14:54.693 "lvol": { 00:14:54.693 "lvol_store_uuid": "c0492132-f354-4c02-a2ea-2ee54431abfc", 00:14:54.693 "base_bdev": "nvme0n1", 00:14:54.693 "thin_provision": true, 00:14:54.693 "snapshot": false, 00:14:54.693 "clone": false, 00:14:54.693 "esnap_clone": false 00:14:54.693 } 00:14:54.693 } 00:14:54.693 } 00:14:54.693 ]' 00:14:54.693 15:57:06 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:54.693 15:57:06 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:54.693 15:57:06 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:54.693 15:57:06 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:54.693 15:57:06 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:54.693 15:57:06 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:54.693 15:57:06 -- ftl/common.sh@41 -- # local base_size=5171 00:14:54.693 15:57:06 -- ftl/common.sh@44 -- # local nvc_bdev 00:14:54.693 15:57:06 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:14:54.953 15:57:06 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:14:54.953 15:57:06 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:14:54.953 15:57:06 -- ftl/common.sh@48 -- # get_bdev_size 5e9d22a6-cb36-450c-ba54-a8fcae7f5344 00:14:54.953 15:57:06 -- common/autotest_common.sh@1367 -- # local bdev_name=5e9d22a6-cb36-450c-ba54-a8fcae7f5344 00:14:54.953 15:57:06 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:54.953 15:57:06 -- common/autotest_common.sh@1369 -- # local bs 00:14:54.953 15:57:06 -- common/autotest_common.sh@1370 -- # local nb 00:14:54.953 15:57:06 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e9d22a6-cb36-450c-ba54-a8fcae7f5344 00:14:55.212 15:57:06 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:55.212 { 00:14:55.212 "name": "5e9d22a6-cb36-450c-ba54-a8fcae7f5344", 00:14:55.212 "aliases": [ 00:14:55.212 "lvs/nvme0n1p0" 00:14:55.212 ], 00:14:55.212 "product_name": "Logical Volume", 00:14:55.212 "block_size": 4096, 00:14:55.212 "num_blocks": 26476544, 00:14:55.212 "uuid": "5e9d22a6-cb36-450c-ba54-a8fcae7f5344", 00:14:55.212 "assigned_rate_limits": { 00:14:55.212 "rw_ios_per_sec": 0, 00:14:55.212 "rw_mbytes_per_sec": 0, 00:14:55.212 "r_mbytes_per_sec": 0, 00:14:55.212 "w_mbytes_per_sec": 0 00:14:55.212 }, 00:14:55.212 "claimed": false, 00:14:55.212 "zoned": false, 00:14:55.212 "supported_io_types": { 00:14:55.212 "read": true, 00:14:55.212 "write": true, 00:14:55.212 "unmap": true, 00:14:55.212 "write_zeroes": true, 00:14:55.212 "flush": false, 00:14:55.212 "reset": true, 00:14:55.212 "compare": false, 00:14:55.212 "compare_and_write": false, 00:14:55.212 "abort": false, 00:14:55.212 "nvme_admin": false, 00:14:55.212 "nvme_io": false 00:14:55.212 }, 00:14:55.212 "driver_specific": { 00:14:55.212 "lvol": { 00:14:55.212 "lvol_store_uuid": "c0492132-f354-4c02-a2ea-2ee54431abfc", 00:14:55.212 "base_bdev": "nvme0n1", 00:14:55.212 "thin_provision": true, 
00:14:55.212 "snapshot": false, 00:14:55.212 "clone": false, 00:14:55.212 "esnap_clone": false 00:14:55.212 } 00:14:55.212 } 00:14:55.212 } 00:14:55.212 ]' 00:14:55.212 15:57:06 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:55.212 15:57:06 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:55.212 15:57:06 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:55.212 15:57:06 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:55.212 15:57:06 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:55.212 15:57:06 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:55.212 15:57:06 -- ftl/common.sh@48 -- # cache_size=5171 00:14:55.212 15:57:06 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:14:55.472 15:57:06 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:14:55.472 15:57:06 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:14:55.472 15:57:06 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:14:55.472 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:14:55.472 15:57:06 -- ftl/fio.sh@56 -- # get_bdev_size 5e9d22a6-cb36-450c-ba54-a8fcae7f5344 00:14:55.472 15:57:06 -- common/autotest_common.sh@1367 -- # local bdev_name=5e9d22a6-cb36-450c-ba54-a8fcae7f5344 00:14:55.472 15:57:06 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:55.472 15:57:06 -- common/autotest_common.sh@1369 -- # local bs 00:14:55.472 15:57:06 -- common/autotest_common.sh@1370 -- # local nb 00:14:55.472 15:57:06 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e9d22a6-cb36-450c-ba54-a8fcae7f5344 00:14:55.731 15:57:06 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:55.731 { 00:14:55.731 "name": "5e9d22a6-cb36-450c-ba54-a8fcae7f5344", 00:14:55.731 "aliases": [ 00:14:55.731 "lvs/nvme0n1p0" 00:14:55.731 ], 00:14:55.731 "product_name": "Logical Volume", 00:14:55.731 "block_size": 4096, 00:14:55.731 "num_blocks": 26476544, 00:14:55.731 "uuid": "5e9d22a6-cb36-450c-ba54-a8fcae7f5344", 00:14:55.731 "assigned_rate_limits": { 00:14:55.731 "rw_ios_per_sec": 0, 00:14:55.731 "rw_mbytes_per_sec": 0, 00:14:55.731 "r_mbytes_per_sec": 0, 00:14:55.731 "w_mbytes_per_sec": 0 00:14:55.731 }, 00:14:55.731 "claimed": false, 00:14:55.731 "zoned": false, 00:14:55.731 "supported_io_types": { 00:14:55.731 "read": true, 00:14:55.731 "write": true, 00:14:55.731 "unmap": true, 00:14:55.731 "write_zeroes": true, 00:14:55.731 "flush": false, 00:14:55.731 "reset": true, 00:14:55.731 "compare": false, 00:14:55.731 "compare_and_write": false, 00:14:55.731 "abort": false, 00:14:55.731 "nvme_admin": false, 00:14:55.731 "nvme_io": false 00:14:55.731 }, 00:14:55.731 "driver_specific": { 00:14:55.731 "lvol": { 00:14:55.731 "lvol_store_uuid": "c0492132-f354-4c02-a2ea-2ee54431abfc", 00:14:55.731 "base_bdev": "nvme0n1", 00:14:55.731 "thin_provision": true, 00:14:55.731 "snapshot": false, 00:14:55.731 "clone": false, 00:14:55.731 "esnap_clone": false 00:14:55.731 } 00:14:55.731 } 00:14:55.731 } 00:14:55.731 ]' 00:14:55.731 15:57:06 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:55.731 15:57:06 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:55.731 15:57:06 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:55.731 15:57:06 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:55.731 15:57:06 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:55.731 15:57:06 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:55.731 
15:57:06 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:14:55.731 15:57:06 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:14:55.731 15:57:06 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5e9d22a6-cb36-450c-ba54-a8fcae7f5344 -c nvc0n1p0 --l2p_dram_limit 60 00:14:55.731 [2024-11-29 15:57:07.153762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.731 [2024-11-29 15:57:07.153879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:14:55.731 [2024-11-29 15:57:07.153899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:14:55.731 [2024-11-29 15:57:07.153906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.731 [2024-11-29 15:57:07.153968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.731 [2024-11-29 15:57:07.153990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:55.731 [2024-11-29 15:57:07.153998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:14:55.731 [2024-11-29 15:57:07.154004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.731 [2024-11-29 15:57:07.154029] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:14:55.731 [2024-11-29 15:57:07.154601] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:14:55.731 [2024-11-29 15:57:07.154616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.731 [2024-11-29 15:57:07.154622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:55.732 [2024-11-29 15:57:07.154630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:14:55.732 [2024-11-29 15:57:07.154635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.732 [2024-11-29 15:57:07.154700] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b8b85904-402e-4205-a1d4-c321b2c99653 00:14:55.732 [2024-11-29 15:57:07.155719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.732 [2024-11-29 15:57:07.155742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:14:55.732 [2024-11-29 15:57:07.155750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:14:55.732 [2024-11-29 15:57:07.155757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.991 [2024-11-29 15:57:07.160921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.991 [2024-11-29 15:57:07.161038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:55.991 [2024-11-29 15:57:07.161051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.089 ms 00:14:55.991 [2024-11-29 15:57:07.161058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.991 [2024-11-29 15:57:07.161127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.991 [2024-11-29 15:57:07.161135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:55.991 [2024-11-29 15:57:07.161142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:14:55.991 [2024-11-29 15:57:07.161150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.991 [2024-11-29 15:57:07.161200] mngt/ftl_mngt.c: 406:trace_step: 
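Two things are worth noting in the setup that just completed. First, the "[: -eq: unary operator expected" message a few lines up is a quirk in fio.sh itself, not a test failure: line 52 feeds an empty variable to the old single-bracket test, the expression collapses to [ -eq 1 ], the test returns an error status, and the run simply continues down the default path (l2p_dram_size_mb=60). Second, the FTL stack was assembled from six RPCs. A condensed sketch of that sequence, with every command and size taken verbatim from the trace (capturing the outputs into variables is an idealization of the xtrace steps above):

    #!/usr/bin/env bash
    # Sketch: the six RPCs that built ftl0, condensed from the trace (values verbatim).
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc_py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0   # base disk -> nvme0n1
    lvs=$($rpc_py bdev_lvol_create_lvstore nvme0n1 lvs)                    # -> c0492132-f354-4c02-a2ea-2ee54431abfc
    base=$($rpc_py bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")         # 103424 MiB thin lvol

    $rpc_py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0   # cache disk -> nvc0n1
    $rpc_py bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB split -> nvc0n1p0

    # Glue base + NV cache into an FTL bdev; L2P resident set capped at 60 MiB of DRAM.
    $rpc_py -t 240 bdev_ftl_create -b ftl0 -d "$base" -c nvc0n1p0 --l2p_dram_limit 60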
*NOTICE*: [FTL][ftl0] Action 00:14:55.991 [2024-11-29 15:57:07.161209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:14:55.991 [2024-11-29 15:57:07.161214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:14:55.991 [2024-11-29 15:57:07.161222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.991 [2024-11-29 15:57:07.161251] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:14:55.991 [2024-11-29 15:57:07.164233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.991 [2024-11-29 15:57:07.164316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:55.991 [2024-11-29 15:57:07.164331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.986 ms 00:14:55.991 [2024-11-29 15:57:07.164337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.991 [2024-11-29 15:57:07.164376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.991 [2024-11-29 15:57:07.164382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:14:55.991 [2024-11-29 15:57:07.164389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:14:55.991 [2024-11-29 15:57:07.164394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.991 [2024-11-29 15:57:07.164427] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:14:55.991 [2024-11-29 15:57:07.164513] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:14:55.991 [2024-11-29 15:57:07.164535] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:14:55.991 [2024-11-29 15:57:07.164544] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:14:55.991 [2024-11-29 15:57:07.164553] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:14:55.991 [2024-11-29 15:57:07.164559] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:14:55.991 [2024-11-29 15:57:07.164568] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:14:55.991 [2024-11-29 15:57:07.164574] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:14:55.991 [2024-11-29 15:57:07.164582] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:14:55.991 [2024-11-29 15:57:07.164588] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:14:55.991 [2024-11-29 15:57:07.164595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.991 [2024-11-29 15:57:07.164601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:14:55.991 [2024-11-29 15:57:07.164608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:14:55.991 [2024-11-29 15:57:07.164613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.991 [2024-11-29 15:57:07.164671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.991 [2024-11-29 15:57:07.164677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:14:55.991 [2024-11-29 15:57:07.164685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.035 ms 00:14:55.991 [2024-11-29 15:57:07.164690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.991 [2024-11-29 15:57:07.164774] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:14:55.991 [2024-11-29 15:57:07.164781] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:14:55.991 [2024-11-29 15:57:07.164788] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:55.991 [2024-11-29 15:57:07.164794] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:55.991 [2024-11-29 15:57:07.164801] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:14:55.991 [2024-11-29 15:57:07.164807] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:14:55.991 [2024-11-29 15:57:07.164813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:14:55.991 [2024-11-29 15:57:07.164818] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:14:55.991 [2024-11-29 15:57:07.164825] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:14:55.991 [2024-11-29 15:57:07.164830] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:55.991 [2024-11-29 15:57:07.164837] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:14:55.991 [2024-11-29 15:57:07.164845] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:14:55.991 [2024-11-29 15:57:07.164852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:55.991 [2024-11-29 15:57:07.164857] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:14:55.991 [2024-11-29 15:57:07.164863] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:14:55.991 [2024-11-29 15:57:07.164869] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:55.991 [2024-11-29 15:57:07.164876] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:14:55.991 [2024-11-29 15:57:07.164881] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:14:55.991 [2024-11-29 15:57:07.164887] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:55.991 [2024-11-29 15:57:07.164892] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:14:55.991 [2024-11-29 15:57:07.164899] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:14:55.991 [2024-11-29 15:57:07.164904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:14:55.991 [2024-11-29 15:57:07.164910] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:14:55.991 [2024-11-29 15:57:07.164915] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:14:55.991 [2024-11-29 15:57:07.164921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:55.991 [2024-11-29 15:57:07.164926] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:14:55.992 [2024-11-29 15:57:07.164932] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:14:55.992 [2024-11-29 15:57:07.164937] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:55.992 [2024-11-29 15:57:07.164943] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:14:55.992 [2024-11-29 15:57:07.164948] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:14:55.992 [2024-11-29 15:57:07.164954] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:55.992 [2024-11-29 15:57:07.164959] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:14:55.992 [2024-11-29 15:57:07.164966] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:14:55.992 [2024-11-29 15:57:07.164996] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:55.992 [2024-11-29 15:57:07.165004] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:14:55.992 [2024-11-29 15:57:07.165009] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:14:55.992 [2024-11-29 15:57:07.165016] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:55.992 [2024-11-29 15:57:07.165021] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:14:55.992 [2024-11-29 15:57:07.165027] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:14:55.992 [2024-11-29 15:57:07.165032] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:55.992 [2024-11-29 15:57:07.165038] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:14:55.992 [2024-11-29 15:57:07.165043] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:14:55.992 [2024-11-29 15:57:07.165050] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:55.992 [2024-11-29 15:57:07.165057] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:55.992 [2024-11-29 15:57:07.165063] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:14:55.992 [2024-11-29 15:57:07.165069] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:14:55.992 [2024-11-29 15:57:07.165075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:14:55.992 [2024-11-29 15:57:07.165084] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:14:55.992 [2024-11-29 15:57:07.165092] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:14:55.992 [2024-11-29 15:57:07.165098] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:14:55.992 [2024-11-29 15:57:07.165105] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:14:55.992 [2024-11-29 15:57:07.165112] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:55.992 [2024-11-29 15:57:07.165121] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:14:55.992 [2024-11-29 15:57:07.165126] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:14:55.992 [2024-11-29 15:57:07.165133] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:14:55.992 [2024-11-29 15:57:07.165139] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:14:55.992 [2024-11-29 15:57:07.165145] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:14:55.992 [2024-11-29 15:57:07.165150] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:14:55.992 
[2024-11-29 15:57:07.165162] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:14:55.992 [2024-11-29 15:57:07.165167] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:14:55.992 [2024-11-29 15:57:07.165175] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:14:55.992 [2024-11-29 15:57:07.165180] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:14:55.992 [2024-11-29 15:57:07.165187] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:14:55.992 [2024-11-29 15:57:07.165192] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:14:55.992 [2024-11-29 15:57:07.165201] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:14:55.992 [2024-11-29 15:57:07.165206] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:14:55.992 [2024-11-29 15:57:07.165213] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:55.992 [2024-11-29 15:57:07.165221] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:14:55.992 [2024-11-29 15:57:07.165228] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:14:55.992 [2024-11-29 15:57:07.165234] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:14:55.992 [2024-11-29 15:57:07.165240] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:14:55.992 [2024-11-29 15:57:07.165246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.992 [2024-11-29 15:57:07.165253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:14:55.992 [2024-11-29 15:57:07.165258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:14:55.992 [2024-11-29 15:57:07.165265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.992 [2024-11-29 15:57:07.177442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.992 [2024-11-29 15:57:07.177474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:55.992 [2024-11-29 15:57:07.177481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.098 ms 00:14:55.992 [2024-11-29 15:57:07.177488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.992 [2024-11-29 15:57:07.177559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.992 [2024-11-29 15:57:07.177569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:14:55.992 [2024-11-29 15:57:07.177575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:14:55.992 [2024-11-29 15:57:07.177582] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.992 [2024-11-29 15:57:07.202995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.992 [2024-11-29 15:57:07.203022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:55.992 [2024-11-29 15:57:07.203030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.373 ms 00:14:55.992 [2024-11-29 15:57:07.203038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.992 [2024-11-29 15:57:07.203069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.992 [2024-11-29 15:57:07.203076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:55.992 [2024-11-29 15:57:07.203083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:14:55.992 [2024-11-29 15:57:07.203091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.992 [2024-11-29 15:57:07.203406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.992 [2024-11-29 15:57:07.203423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:55.992 [2024-11-29 15:57:07.203430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:14:55.992 [2024-11-29 15:57:07.203437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.992 [2024-11-29 15:57:07.203538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.992 [2024-11-29 15:57:07.203556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:55.992 [2024-11-29 15:57:07.203562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:14:55.992 [2024-11-29 15:57:07.203569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.992 [2024-11-29 15:57:07.228001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.992 [2024-11-29 15:57:07.228172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:55.992 [2024-11-29 15:57:07.228257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.406 ms 00:14:55.992 [2024-11-29 15:57:07.228295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.992 [2024-11-29 15:57:07.237713] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:14:55.992 [2024-11-29 15:57:07.250242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.992 [2024-11-29 15:57:07.250327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:14:55.992 [2024-11-29 15:57:07.250366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.777 ms 00:14:55.992 [2024-11-29 15:57:07.250384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.992 [2024-11-29 15:57:07.304998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.992 [2024-11-29 15:57:07.305095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:14:55.992 [2024-11-29 15:57:07.305174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.569 ms 00:14:55.992 [2024-11-29 15:57:07.305193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.992 [2024-11-29 15:57:07.305230] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
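The superblock metadata layout dumped a little above encodes each region as a type/version pair plus blk_offs and blk_sz counted in FTL blocks, in hex. With the 4096-byte block size this ftl0 bdev reports later in the log, those hex fields reproduce the MiB figures from the dump_region lines: region type 0xd at blk_offs 0x5d20 with blk_sz 0x400 is exactly the "Region p2l3, offset: 93.12 MiB, blocks: 4.00 MiB" entry. A minimal decoding sketch follows; the 4 KiB block size is read off this log's bdev_get_bdevs output rather than guaranteed by the dump format itself.

import re

BLOCK_SIZE = 4096  # bytes per FTL block; matches "block_size": 4096 reported for ftl0 below

record = "Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400"
m = re.search(r"type:(0x[0-9a-f]+) ver:(\d+) blk_offs:(0x[0-9a-f]+) blk_sz:(0x[0-9a-f]+)", record)
rtype, ver = m.group(1), int(m.group(2))
offs_mib = int(m.group(3), 16) * BLOCK_SIZE / 2**20  # block offset -> MiB
size_mib = int(m.group(4), 16) * BLOCK_SIZE / 2**20  # block count  -> MiB
print(f"type={rtype} ver={ver} offset={offs_mib:.2f} MiB size={size_mib:.2f} MiB")
# -> type=0xd ver=1 offset=93.12 MiB size=4.00 MiB, i.e. the p2l3 region above

The same conversion checks out for the other entries, e.g. type 0xe at blk_offs 0x6120 with blk_sz 0x40 gives offset 97.12 MiB / size 0.25 MiB, matching the trim_md region.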
00:14:55.992 [2024-11-29 15:57:07.305288] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:14:58.525 [2024-11-29 15:57:09.837186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:58.525 [2024-11-29 15:57:09.837389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:14:58.525 [2024-11-29 15:57:09.837461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2531.946 ms 00:14:58.525 [2024-11-29 15:57:09.837486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:58.525 [2024-11-29 15:57:09.837714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:58.525 [2024-11-29 15:57:09.837793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:14:58.525 [2024-11-29 15:57:09.837821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:14:58.525 [2024-11-29 15:57:09.837889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:58.525 [2024-11-29 15:57:09.861194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:58.525 [2024-11-29 15:57:09.861227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:14:58.525 [2024-11-29 15:57:09.861242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.245 ms 00:14:58.525 [2024-11-29 15:57:09.861250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:58.525 [2024-11-29 15:57:09.883529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:58.525 [2024-11-29 15:57:09.883558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:14:58.525 [2024-11-29 15:57:09.883573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.228 ms 00:14:58.525 [2024-11-29 15:57:09.883580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:58.525 [2024-11-29 15:57:09.883887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:58.525 [2024-11-29 15:57:09.883903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:14:58.525 [2024-11-29 15:57:09.883913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:14:58.525 [2024-11-29 15:57:09.883920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:58.525 [2024-11-29 15:57:09.943919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:58.525 [2024-11-29 15:57:09.943949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:14:58.525 [2024-11-29 15:57:09.943962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.956 ms 00:14:58.525 [2024-11-29 15:57:09.943983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:58.783 [2024-11-29 15:57:09.967466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:58.783 [2024-11-29 15:57:09.967589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:14:58.783 [2024-11-29 15:57:09.967610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.437 ms 00:14:58.783 [2024-11-29 15:57:09.967618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:58.783 [2024-11-29 15:57:09.971404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:58.783 [2024-11-29 15:57:09.971432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
00:14:58.783 [2024-11-29 15:57:09.971446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.750 ms 00:14:58.783 [2024-11-29 15:57:09.971453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:58.783 [2024-11-29 15:57:09.994580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:58.783 [2024-11-29 15:57:09.994728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:14:58.783 [2024-11-29 15:57:09.994748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.080 ms 00:14:58.783 [2024-11-29 15:57:09.994755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:58.783 [2024-11-29 15:57:09.994800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:58.783 [2024-11-29 15:57:09.994809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:14:58.783 [2024-11-29 15:57:09.994819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:14:58.784 [2024-11-29 15:57:09.994826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:58.784 [2024-11-29 15:57:09.994908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:58.784 [2024-11-29 15:57:09.994917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:14:58.784 [2024-11-29 15:57:09.994928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:14:58.784 [2024-11-29 15:57:09.994935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:58.784 [2024-11-29 15:57:09.995837] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2841.654 ms, result 0 00:14:58.784 { 00:14:58.784 "name": "ftl0", 00:14:58.784 "uuid": "b8b85904-402e-4205-a1d4-c321b2c99653" 00:14:58.784 } 00:14:58.784 15:57:10 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:14:58.784 15:57:10 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:14:58.784 15:57:10 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:58.784 15:57:10 -- common/autotest_common.sh@899 -- # local i 00:14:58.784 15:57:10 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:58.784 15:57:10 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:58.784 15:57:10 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:58.784 15:57:10 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:14:59.042 [ 00:14:59.042 { 00:14:59.042 "name": "ftl0", 00:14:59.042 "aliases": [ 00:14:59.042 "b8b85904-402e-4205-a1d4-c321b2c99653" 00:14:59.042 ], 00:14:59.042 "product_name": "FTL disk", 00:14:59.042 "block_size": 4096, 00:14:59.042 "num_blocks": 20971520, 00:14:59.042 "uuid": "b8b85904-402e-4205-a1d4-c321b2c99653", 00:14:59.042 "assigned_rate_limits": { 00:14:59.042 "rw_ios_per_sec": 0, 00:14:59.042 "rw_mbytes_per_sec": 0, 00:14:59.042 "r_mbytes_per_sec": 0, 00:14:59.042 "w_mbytes_per_sec": 0 00:14:59.042 }, 00:14:59.042 "claimed": false, 00:14:59.042 "zoned": false, 00:14:59.042 "supported_io_types": { 00:14:59.042 "read": true, 00:14:59.042 "write": true, 00:14:59.042 "unmap": true, 00:14:59.042 "write_zeroes": true, 00:14:59.042 "flush": true, 00:14:59.042 "reset": false, 00:14:59.042 "compare": false, 00:14:59.042 "compare_and_write": false, 00:14:59.042 "abort": false, 00:14:59.042 "nvme_admin": false, 00:14:59.042 "nvme_io": false 00:14:59.042 }, 
00:14:59.042 "driver_specific": { 00:14:59.042 "ftl": { 00:14:59.042 "base_bdev": "5e9d22a6-cb36-450c-ba54-a8fcae7f5344", 00:14:59.042 "cache": "nvc0n1p0" 00:14:59.042 } 00:14:59.042 } 00:14:59.042 } 00:14:59.042 ] 00:14:59.042 15:57:10 -- common/autotest_common.sh@905 -- # return 0 00:14:59.042 15:57:10 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:14:59.042 15:57:10 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:14:59.301 15:57:10 -- ftl/fio.sh@70 -- # echo ']}' 00:14:59.301 15:57:10 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:14:59.561 [2024-11-29 15:57:10.768895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.561 [2024-11-29 15:57:10.768937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:14:59.561 [2024-11-29 15:57:10.768949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:14:59.561 [2024-11-29 15:57:10.768958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.561 [2024-11-29 15:57:10.769015] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:14:59.561 [2024-11-29 15:57:10.771626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.561 [2024-11-29 15:57:10.771654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:14:59.561 [2024-11-29 15:57:10.771668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.593 ms 00:14:59.561 [2024-11-29 15:57:10.771676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.561 [2024-11-29 15:57:10.772171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.561 [2024-11-29 15:57:10.772185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:14:59.561 [2024-11-29 15:57:10.772195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms 00:14:59.561 [2024-11-29 15:57:10.772202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.561 [2024-11-29 15:57:10.775639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.561 [2024-11-29 15:57:10.775659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:14:59.561 [2024-11-29 15:57:10.775670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.410 ms 00:14:59.561 [2024-11-29 15:57:10.775679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.561 [2024-11-29 15:57:10.781959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.561 [2024-11-29 15:57:10.782094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:14:59.561 [2024-11-29 15:57:10.782113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.244 ms 00:14:59.561 [2024-11-29 15:57:10.782120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.561 [2024-11-29 15:57:10.805247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.561 [2024-11-29 15:57:10.805353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:14:59.561 [2024-11-29 15:57:10.805372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.028 ms 00:14:59.561 [2024-11-29 15:57:10.805379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.561 [2024-11-29 15:57:10.820328] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.561 [2024-11-29 15:57:10.820434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:14:59.561 [2024-11-29 15:57:10.820466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.900 ms 00:14:59.561 [2024-11-29 15:57:10.820474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.561 [2024-11-29 15:57:10.820668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.561 [2024-11-29 15:57:10.820678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:14:59.561 [2024-11-29 15:57:10.820690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:14:59.561 [2024-11-29 15:57:10.820697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.561 [2024-11-29 15:57:10.843565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.561 [2024-11-29 15:57:10.843666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:14:59.561 [2024-11-29 15:57:10.843683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.835 ms 00:14:59.561 [2024-11-29 15:57:10.843689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.561 [2024-11-29 15:57:10.865989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.561 [2024-11-29 15:57:10.866083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:14:59.561 [2024-11-29 15:57:10.866100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.256 ms 00:14:59.561 [2024-11-29 15:57:10.866106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.561 [2024-11-29 15:57:10.888122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.561 [2024-11-29 15:57:10.888149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:14:59.561 [2024-11-29 15:57:10.888161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.979 ms 00:14:59.561 [2024-11-29 15:57:10.888168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.561 [2024-11-29 15:57:10.910390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.561 [2024-11-29 15:57:10.910417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:14:59.561 [2024-11-29 15:57:10.910429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.120 ms 00:14:59.561 [2024-11-29 15:57:10.910435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.561 [2024-11-29 15:57:10.910479] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:14:59.561 [2024-11-29 15:57:10.910494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:14:59.561 [2024-11-29 15:57:10.910504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:14:59.561 [2024-11-29 15:57:10.910512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:14:59.561 [2024-11-29 15:57:10.910521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:14:59.561 [2024-11-29 15:57:10.910528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:14:59.561 [2024-11-29 15:57:10.910537] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:14:59.561 [2024-11-29 15:57:10.910544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:14:59.561 [2024-11-29 15:57:10.910553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:14:59.561 [2024-11-29 15:57:10.910560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:14:59.561 [2024-11-29 15:57:10.910570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:14:59.561 [2024-11-29 15:57:10.910577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:14:59.561 [2024-11-29 15:57:10.910585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:14:59.561 [2024-11-29 15:57:10.910593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:14:59.561 [2024-11-29 15:57:10.910601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:14:59.561 [2024-11-29 15:57:10.910609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:14:59.561 [2024-11-29 15:57:10.910619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:14:59.561 [2024-11-29 15:57:10.910627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:14:59.561 [2024-11-29 15:57:10.910636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:14:59.561 [2024-11-29 15:57:10.910643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:14:59.561 [2024-11-29 15:57:10.910653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:14:59.561 [2024-11-29 15:57:10.910660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:14:59.561 [2024-11-29 15:57:10.910669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:14:59.561 [2024-11-29 15:57:10.910676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:14:59.561 [2024-11-29 15:57:10.910684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:14:59.561 [2024-11-29 15:57:10.910691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:14:59.561 [2024-11-29 15:57:10.910700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:14:59.561 [2024-11-29 15:57:10.910708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 
15:57:10.910743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:14:59.562 [2024-11-29 15:57:10.910949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.910997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:14:59.562 [2024-11-29 15:57:10.911358] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:14:59.562 [2024-11-29 15:57:10.911367] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b8b85904-402e-4205-a1d4-c321b2c99653 00:14:59.562 [2024-11-29 15:57:10.911374] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:14:59.562 [2024-11-29 15:57:10.911382] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:14:59.562 [2024-11-29 15:57:10.911389] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:14:59.562 [2024-11-29 15:57:10.911398] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:14:59.562 [2024-11-29 15:57:10.911405] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:14:59.562 [2024-11-29 15:57:10.911413] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:14:59.562 [2024-11-29 15:57:10.911420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:14:59.562 [2024-11-29 15:57:10.911427] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:14:59.562 [2024-11-29 15:57:10.911433] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:14:59.562 [2024-11-29 15:57:10.911443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.562 [2024-11-29 15:57:10.911452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:14:59.562 [2024-11-29 15:57:10.911461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.966 ms 00:14:59.562 [2024-11-29 15:57:10.911468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.562 [2024-11-29 15:57:10.923995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.562 [2024-11-29 15:57:10.924020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:14:59.562 [2024-11-29 15:57:10.924031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.487 ms 00:14:59.562 [2024-11-29 15:57:10.924037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.562 [2024-11-29 15:57:10.924233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.562 [2024-11-29 15:57:10.924241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:14:59.563 [2024-11-29 15:57:10.924250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:14:59.563 [2024-11-29 15:57:10.924257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.563 [2024-11-29 15:57:10.968288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:59.563 [2024-11-29 15:57:10.968320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:59.563 [2024-11-29 15:57:10.968332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:59.563 [2024-11-29 15:57:10.968340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.563 [2024-11-29 15:57:10.968405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:59.563 [2024-11-29 15:57:10.968413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:59.563 [2024-11-29 15:57:10.968422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:59.563 [2024-11-29 15:57:10.968428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.563 [2024-11-29 15:57:10.968510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:59.563 [2024-11-29 15:57:10.968519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:59.563 [2024-11-29 15:57:10.968529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:59.563 [2024-11-29 15:57:10.968535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.563 [2024-11-29 15:57:10.968566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:59.563 [2024-11-29 15:57:10.968576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
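The statistics block above reports "WAF: inf", which is consistent with write amplification computed as total (media-side) writes over user writes: 960 writes issued by the FTL against 0 user writes on this first startup. A hedged sketch of that ratio, with the zero-user-writes case handled the way the dump suggests; the interpretation of the two counters is inferred from the figures above, not taken from an SPDK-internal definition.

import math

def waf(total_writes: int, user_writes: int) -> float:
    # Write amplification factor: media writes per user write; with no
    # user writes yet (first startup), report the ratio as inf.
    return math.inf if user_writes == 0 else total_writes / user_writes

print(waf(960, 0))    # inf, as in the ftl_dev_dump_stats output above
print(waf(960, 480))  # 2.0 once user I/O exists (hypothetical figures)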
00:14:59.563 [2024-11-29 15:57:10.968584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:59.563 [2024-11-29 15:57:10.968591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.821 [2024-11-29 15:57:11.054287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:59.821 [2024-11-29 15:57:11.054331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:59.821 [2024-11-29 15:57:11.054343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:59.821 [2024-11-29 15:57:11.054351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.821 [2024-11-29 15:57:11.083323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:59.821 [2024-11-29 15:57:11.083355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:59.821 [2024-11-29 15:57:11.083367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:59.821 [2024-11-29 15:57:11.083375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.821 [2024-11-29 15:57:11.083444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:59.821 [2024-11-29 15:57:11.083453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:59.821 [2024-11-29 15:57:11.083463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:59.821 [2024-11-29 15:57:11.083470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.821 [2024-11-29 15:57:11.083533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:59.821 [2024-11-29 15:57:11.083542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:59.821 [2024-11-29 15:57:11.083554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:59.821 [2024-11-29 15:57:11.083561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.821 [2024-11-29 15:57:11.083665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:59.821 [2024-11-29 15:57:11.083674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:59.821 [2024-11-29 15:57:11.083684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:59.821 [2024-11-29 15:57:11.083690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.821 [2024-11-29 15:57:11.083745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:59.821 [2024-11-29 15:57:11.083753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:14:59.821 [2024-11-29 15:57:11.083763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:59.821 [2024-11-29 15:57:11.083771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.821 [2024-11-29 15:57:11.083816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:59.821 [2024-11-29 15:57:11.083824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:59.821 [2024-11-29 15:57:11.083834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:59.821 [2024-11-29 15:57:11.083841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.821 [2024-11-29 15:57:11.083893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:59.821 [2024-11-29 15:57:11.083902] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:59.821 [2024-11-29 15:57:11.083914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:59.821 [2024-11-29 15:57:11.083920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.821 [2024-11-29 15:57:11.084095] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 315.167 ms, result 0 00:14:59.821 true 00:14:59.821 15:57:11 -- ftl/fio.sh@75 -- # killprocess 70511 00:14:59.821 15:57:11 -- common/autotest_common.sh@936 -- # '[' -z 70511 ']' 00:14:59.821 15:57:11 -- common/autotest_common.sh@940 -- # kill -0 70511 00:14:59.821 15:57:11 -- common/autotest_common.sh@941 -- # uname 00:14:59.821 15:57:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:59.821 15:57:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70511 00:14:59.821 15:57:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:59.821 killing process with pid 70511 00:14:59.821 15:57:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:59.821 15:57:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70511' 00:14:59.821 15:57:11 -- common/autotest_common.sh@955 -- # kill 70511 00:14:59.821 15:57:11 -- common/autotest_common.sh@960 -- # wait 70511 00:15:06.384 15:57:16 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:06.384 15:57:16 -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:06.384 15:57:16 -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:06.384 15:57:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:06.384 15:57:16 -- common/autotest_common.sh@10 -- # set +x 00:15:06.384 15:57:16 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:06.384 15:57:16 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:06.384 15:57:16 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:06.384 15:57:16 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:06.384 15:57:16 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:06.384 15:57:16 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:06.384 15:57:16 -- common/autotest_common.sh@1330 -- # shift 00:15:06.384 15:57:16 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:06.384 15:57:16 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:06.384 15:57:16 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:06.384 15:57:16 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:06.384 15:57:16 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:06.384 15:57:16 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:06.384 15:57:16 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:06.384 15:57:16 -- common/autotest_common.sh@1336 -- # break 00:15:06.384 15:57:16 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:06.384 15:57:16 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:06.384 test: (g=0): rw=randwrite, bs=(R) 
68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:06.384 fio-3.35 00:15:06.384 Starting 1 thread 00:15:10.590 00:15:10.590 test: (groupid=0, jobs=1): err= 0: pid=70703: Fri Nov 29 15:57:21 2024 00:15:10.590 read: IOPS=912, BW=60.6MiB/s (63.5MB/s)(255MiB/4200msec) 00:15:10.590 slat (nsec): min=4007, max=97483, avg=5367.12, stdev=2172.36 00:15:10.590 clat (usec): min=252, max=1491, avg=492.37, stdev=136.69 00:15:10.590 lat (usec): min=257, max=1496, avg=497.73, stdev=136.85 00:15:10.591 clat percentiles (usec): 00:15:10.591 | 1.00th=[ 285], 5.00th=[ 302], 10.00th=[ 330], 20.00th=[ 396], 00:15:10.591 | 30.00th=[ 441], 40.00th=[ 461], 50.00th=[ 469], 60.00th=[ 498], 00:15:10.591 | 70.00th=[ 523], 80.00th=[ 537], 90.00th=[ 652], 95.00th=[ 824], 00:15:10.591 | 99.00th=[ 930], 99.50th=[ 988], 99.90th=[ 1172], 99.95th=[ 1336], 00:15:10.591 | 99.99th=[ 1500] 00:15:10.591 write: IOPS=918, BW=61.0MiB/s (64.0MB/s)(256MiB/4196msec); 0 zone resets 00:15:10.591 slat (nsec): min=14792, max=45907, avg=19419.90, stdev=3361.70 00:15:10.591 clat (usec): min=266, max=1829, avg=564.86, stdev=173.68 00:15:10.591 lat (usec): min=290, max=1850, avg=584.28, stdev=173.21 00:15:10.591 clat percentiles (usec): 00:15:10.591 | 1.00th=[ 306], 5.00th=[ 318], 10.00th=[ 363], 20.00th=[ 482], 00:15:10.591 | 30.00th=[ 506], 40.00th=[ 545], 50.00th=[ 553], 60.00th=[ 553], 00:15:10.591 | 70.00th=[ 562], 80.00th=[ 603], 90.00th=[ 807], 95.00th=[ 914], 00:15:10.591 | 99.00th=[ 1205], 99.50th=[ 1385], 99.90th=[ 1713], 99.95th=[ 1778], 00:15:10.591 | 99.99th=[ 1827] 00:15:10.591 bw ( KiB/s): min=52632, max=78472, per=100.00%, avg=63342.00, stdev=7264.69, samples=8 00:15:10.591 iops : min= 774, max= 1154, avg=931.50, stdev=106.83, samples=8 00:15:10.591 lat (usec) : 500=44.64%, 750=45.87%, 1000=8.10% 00:15:10.591 lat (msec) : 2=1.39% 00:15:10.591 cpu : usr=99.40%, sys=0.12%, ctx=7, majf=0, minf=1318 00:15:10.591 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:10.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.591 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.591 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:10.591 00:15:10.591 Run status group 0 (all jobs): 00:15:10.591 READ: bw=60.6MiB/s (63.5MB/s), 60.6MiB/s-60.6MiB/s (63.5MB/s-63.5MB/s), io=255MiB (267MB), run=4200-4200msec 00:15:10.591 WRITE: bw=61.0MiB/s (64.0MB/s), 61.0MiB/s-61.0MiB/s (64.0MB/s-64.0MB/s), io=256MiB (269MB), run=4196-4196msec 00:15:11.977 ----------------------------------------------------- 00:15:11.977 Suppressions used: 00:15:11.977 count bytes template 00:15:11.977 1 5 /usr/src/fio/parse.c 00:15:11.977 1 8 libtcmalloc_minimal.so 00:15:11.977 1 904 libcrypto.so 00:15:11.977 ----------------------------------------------------- 00:15:11.977 00:15:11.977 15:57:23 -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:11.977 15:57:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:11.977 15:57:23 -- common/autotest_common.sh@10 -- # set +x 00:15:11.977 15:57:23 -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:11.977 15:57:23 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:11.977 15:57:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:11.977 15:57:23 -- common/autotest_common.sh@10 -- # set +x 00:15:11.977 15:57:23 -- ftl/fio.sh@80 -- # fio_bdev 
/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:11.977 15:57:23 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:11.977 15:57:23 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:11.977 15:57:23 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:11.977 15:57:23 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:11.977 15:57:23 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:11.977 15:57:23 -- common/autotest_common.sh@1330 -- # shift 00:15:11.977 15:57:23 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:11.977 15:57:23 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:11.977 15:57:23 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:11.977 15:57:23 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:11.977 15:57:23 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:12.238 15:57:23 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:12.238 15:57:23 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:12.238 15:57:23 -- common/autotest_common.sh@1336 -- # break 00:15:12.238 15:57:23 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:12.238 15:57:23 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:12.238 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:12.239 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:12.239 fio-3.35 00:15:12.239 Starting 2 threads 00:15:38.790 00:15:38.790 first_half: (groupid=0, jobs=1): err= 0: pid=70806: Fri Nov 29 15:57:46 2024 00:15:38.790 read: IOPS=2982, BW=11.7MiB/s (12.2MB/s)(255MiB/21873msec) 00:15:38.790 slat (nsec): min=2987, max=34629, avg=5398.41, stdev=1271.68 00:15:38.790 clat (usec): min=568, max=337962, avg=32392.02, stdev=17465.79 00:15:38.790 lat (usec): min=574, max=337967, avg=32397.41, stdev=17465.88 00:15:38.790 clat percentiles (msec): 00:15:38.790 | 1.00th=[ 8], 5.00th=[ 15], 10.00th=[ 29], 20.00th=[ 29], 00:15:38.790 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:15:38.790 | 70.00th=[ 31], 80.00th=[ 34], 90.00th=[ 37], 95.00th=[ 42], 00:15:38.790 | 99.00th=[ 128], 99.50th=[ 144], 99.90th=[ 205], 99.95th=[ 284], 00:15:38.790 | 99.99th=[ 334] 00:15:38.790 write: IOPS=3513, BW=13.7MiB/s (14.4MB/s)(256MiB/18653msec); 0 zone resets 00:15:38.790 slat (usec): min=3, max=548, avg= 6.58, stdev= 3.36 00:15:38.790 clat (usec): min=324, max=72093, avg=10415.45, stdev=16189.70 00:15:38.790 lat (usec): min=336, max=72104, avg=10422.02, stdev=16189.85 00:15:38.790 clat percentiles (usec): 00:15:38.790 | 1.00th=[ 603], 5.00th=[ 701], 10.00th=[ 799], 20.00th=[ 1156], 00:15:38.790 | 30.00th=[ 2704], 40.00th=[ 4080], 50.00th=[ 5014], 60.00th=[ 5407], 00:15:38.790 | 70.00th=[ 5997], 80.00th=[10814], 90.00th=[29230], 95.00th=[57934], 00:15:38.790 | 99.00th=[65274], 99.50th=[66847], 99.90th=[69731], 99.95th=[70779], 00:15:38.790 | 99.99th=[71828] 00:15:38.790 bw ( KiB/s): min= 952, max=41496, per=77.72%, 
avg=21845.33, stdev=12281.96, samples=24 00:15:38.790 iops : min= 238, max=10374, avg=5461.33, stdev=3070.49, samples=24 00:15:38.790 lat (usec) : 500=0.04%, 750=3.77%, 1000=4.38% 00:15:38.790 lat (msec) : 2=4.47%, 4=7.50%, 10=21.00%, 20=5.08%, 50=47.78% 00:15:38.790 lat (msec) : 100=5.20%, 250=0.74%, 500=0.04% 00:15:38.790 cpu : usr=99.46%, sys=0.11%, ctx=35, majf=0, minf=5552 00:15:38.790 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:38.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.790 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:38.790 issued rwts: total=65241,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:38.790 second_half: (groupid=0, jobs=1): err= 0: pid=70807: Fri Nov 29 15:57:46 2024 00:15:38.790 read: IOPS=3006, BW=11.7MiB/s (12.3MB/s)(254MiB/21669msec) 00:15:38.790 slat (nsec): min=2946, max=33971, avg=4271.81, stdev=1263.26 00:15:38.790 clat (usec): min=614, max=322658, avg=33218.44, stdev=16059.93 00:15:38.790 lat (usec): min=619, max=322664, avg=33222.71, stdev=16060.06 00:15:38.790 clat percentiles (msec): 00:15:38.790 | 1.00th=[ 5], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 29], 00:15:38.790 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:15:38.790 | 70.00th=[ 31], 80.00th=[ 34], 90.00th=[ 37], 95.00th=[ 47], 00:15:38.790 | 99.00th=[ 124], 99.50th=[ 140], 99.90th=[ 165], 99.95th=[ 243], 00:15:38.790 | 99.99th=[ 313] 00:15:38.790 write: IOPS=4266, BW=16.7MiB/s (17.5MB/s)(256MiB/15361msec); 0 zone resets 00:15:38.790 slat (usec): min=3, max=923, avg= 5.84, stdev= 4.39 00:15:38.790 clat (usec): min=345, max=71734, avg=9283.48, stdev=15731.66 00:15:38.790 lat (usec): min=350, max=71739, avg=9289.31, stdev=15731.73 00:15:38.790 clat percentiles (usec): 00:15:38.790 | 1.00th=[ 619], 5.00th=[ 734], 10.00th=[ 832], 20.00th=[ 1020], 00:15:38.790 | 30.00th=[ 1270], 40.00th=[ 2802], 50.00th=[ 4113], 60.00th=[ 4948], 00:15:38.790 | 70.00th=[ 5538], 80.00th=[10290], 90.00th=[20055], 95.00th=[57410], 00:15:38.790 | 99.00th=[64750], 99.50th=[66323], 99.90th=[69731], 99.95th=[70779], 00:15:38.790 | 99.99th=[70779] 00:15:38.790 bw ( KiB/s): min= 952, max=47336, per=100.00%, avg=30840.47, stdev=14714.17, samples=17 00:15:38.790 iops : min= 238, max=11834, avg=7710.12, stdev=3678.54, samples=17 00:15:38.790 lat (usec) : 500=0.04%, 750=2.93%, 1000=6.71% 00:15:38.790 lat (msec) : 2=8.06%, 4=7.30%, 10=15.11%, 20=6.15%, 50=47.28% 00:15:38.790 lat (msec) : 100=5.70%, 250=0.69%, 500=0.02% 00:15:38.790 cpu : usr=99.46%, sys=0.13%, ctx=40, majf=0, minf=5573 00:15:38.790 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:38.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.790 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:38.790 issued rwts: total=65143,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:38.790 00:15:38.790 Run status group 0 (all jobs): 00:15:38.790 READ: bw=23.3MiB/s (24.4MB/s), 11.7MiB/s-11.7MiB/s (12.2MB/s-12.3MB/s), io=509MiB (534MB), run=21669-21873msec 00:15:38.790 WRITE: bw=27.4MiB/s (28.8MB/s), 13.7MiB/s-16.7MiB/s (14.4MB/s-17.5MB/s), io=512MiB (537MB), run=15361-18653msec 00:15:38.791 ----------------------------------------------------- 00:15:38.791 Suppressions used: 00:15:38.791 count bytes template 00:15:38.791 2 10 
/usr/src/fio/parse.c 00:15:38.791 3 288 /usr/src/fio/iolog.c 00:15:38.791 1 8 libtcmalloc_minimal.so 00:15:38.791 1 904 libcrypto.so 00:15:38.791 ----------------------------------------------------- 00:15:38.791 00:15:38.791 15:57:48 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:38.791 15:57:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:38.791 15:57:48 -- common/autotest_common.sh@10 -- # set +x 00:15:38.791 15:57:48 -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:38.791 15:57:48 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:38.791 15:57:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:38.791 15:57:48 -- common/autotest_common.sh@10 -- # set +x 00:15:38.791 15:57:48 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:38.791 15:57:48 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:38.791 15:57:48 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:38.791 15:57:48 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:38.791 15:57:48 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:38.791 15:57:48 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:38.791 15:57:48 -- common/autotest_common.sh@1330 -- # shift 00:15:38.791 15:57:48 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:38.791 15:57:48 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:38.791 15:57:48 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:38.791 15:57:48 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:38.791 15:57:48 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:38.791 15:57:48 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:38.791 15:57:48 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:38.791 15:57:48 -- common/autotest_common.sh@1336 -- # break 00:15:38.791 15:57:48 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:38.791 15:57:48 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:38.791 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:38.791 fio-3.35 00:15:38.791 Starting 1 thread 00:15:56.906 00:15:56.906 test: (groupid=0, jobs=1): err= 0: pid=71097: Fri Nov 29 15:58:07 2024 00:15:56.906 read: IOPS=6568, BW=25.7MiB/s (26.9MB/s)(255MiB/9926msec) 00:15:56.906 slat (nsec): min=2992, max=33204, avg=4542.50, stdev=1005.99 00:15:56.906 clat (usec): min=1004, max=35538, avg=19477.58, stdev=2474.60 00:15:56.906 lat (usec): min=1012, max=35544, avg=19482.12, stdev=2474.59 00:15:56.906 clat percentiles (usec): 00:15:56.906 | 1.00th=[14746], 5.00th=[15926], 10.00th=[16712], 20.00th=[17695], 00:15:56.906 | 30.00th=[18220], 40.00th=[18744], 50.00th=[19268], 60.00th=[19792], 00:15:56.906 | 70.00th=[20317], 80.00th=[20841], 90.00th=[22414], 95.00th=[23987], 00:15:56.906 | 99.00th=[27395], 99.50th=[28967], 99.90th=[31589], 99.95th=[32113], 00:15:56.906 | 99.99th=[35390] 00:15:56.906 write: IOPS=8894, BW=34.7MiB/s (36.4MB/s)(256MiB/7368msec); 0 zone resets 
00:15:56.906 slat (usec): min=4, max=565, avg= 7.10, stdev= 5.59 00:15:56.906 clat (usec): min=826, max=81867, avg=14320.86, stdev=15989.08 00:15:56.906 lat (usec): min=831, max=81873, avg=14327.96, stdev=15989.18 00:15:56.906 clat percentiles (usec): 00:15:56.906 | 1.00th=[ 1287], 5.00th=[ 1549], 10.00th=[ 1696], 20.00th=[ 1975], 00:15:56.906 | 30.00th=[ 2343], 40.00th=[ 3261], 50.00th=[ 9765], 60.00th=[12518], 00:15:56.906 | 70.00th=[15795], 80.00th=[18482], 90.00th=[46924], 95.00th=[50070], 00:15:56.906 | 99.00th=[54264], 99.50th=[55313], 99.90th=[57934], 99.95th=[67634], 00:15:56.906 | 99.99th=[76022] 00:15:56.906 bw ( KiB/s): min=23568, max=48256, per=98.22%, avg=34947.60, stdev=6115.41, samples=15 00:15:56.906 iops : min= 5892, max=12064, avg=8736.87, stdev=1528.84, samples=15 00:15:56.906 lat (usec) : 1000=0.03% 00:15:56.906 lat (msec) : 2=10.44%, 4=10.13%, 10=4.90%, 20=48.23%, 50=23.74% 00:15:56.906 lat (msec) : 100=2.54% 00:15:56.906 cpu : usr=99.33%, sys=0.18%, ctx=23, majf=0, minf=5567 00:15:56.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:56.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:56.906 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:56.906 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:56.906 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:56.906 00:15:56.906 Run status group 0 (all jobs): 00:15:56.906 READ: bw=25.7MiB/s (26.9MB/s), 25.7MiB/s-25.7MiB/s (26.9MB/s-26.9MB/s), io=255MiB (267MB), run=9926-9926msec 00:15:56.906 WRITE: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=256MiB (268MB), run=7368-7368msec 00:15:57.479 ----------------------------------------------------- 00:15:57.479 Suppressions used: 00:15:57.479 count bytes template 00:15:57.479 1 5 /usr/src/fio/parse.c 00:15:57.479 2 192 /usr/src/fio/iolog.c 00:15:57.479 1 8 libtcmalloc_minimal.so 00:15:57.479 1 904 libcrypto.so 00:15:57.479 ----------------------------------------------------- 00:15:57.479 00:15:57.479 15:58:08 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:15:57.479 15:58:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:57.479 15:58:08 -- common/autotest_common.sh@10 -- # set +x 00:15:57.479 15:58:08 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:57.479 Remove shared memory files 00:15:57.479 15:58:08 -- ftl/fio.sh@85 -- # remove_shm 00:15:57.479 15:58:08 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:57.479 15:58:08 -- ftl/common.sh@205 -- # rm -f rm -f 00:15:57.479 15:58:08 -- ftl/common.sh@206 -- # rm -f rm -f 00:15:57.479 15:58:08 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid56149 /dev/shm/spdk_tgt_trace.pid69425 00:15:57.479 15:58:08 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:57.479 15:58:08 -- ftl/common.sh@209 -- # rm -f rm -f 00:15:57.479 ************************************ 00:15:57.479 END TEST ftl_fio_basic 00:15:57.479 ************************************ 00:15:57.479 00:15:57.479 real 1m5.814s 00:15:57.479 user 2m21.983s 00:15:57.479 sys 0m3.011s 00:15:57.479 15:58:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:57.479 15:58:08 -- common/autotest_common.sh@10 -- # set +x 00:15:57.479 15:58:08 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:15:57.479 15:58:08 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 
']' 00:15:57.479 15:58:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:57.479 15:58:08 -- common/autotest_common.sh@10 -- # set +x 00:15:57.479 ************************************ 00:15:57.479 START TEST ftl_bdevperf 00:15:57.479 ************************************ 00:15:57.479 15:58:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:15:57.741 * Looking for test storage... 00:15:57.741 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:57.741 15:58:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:57.741 15:58:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:57.741 15:58:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:57.741 15:58:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:57.741 15:58:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:57.741 15:58:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:57.741 15:58:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:57.741 15:58:09 -- scripts/common.sh@335 -- # IFS=.-: 00:15:57.741 15:58:09 -- scripts/common.sh@335 -- # read -ra ver1 00:15:57.741 15:58:09 -- scripts/common.sh@336 -- # IFS=.-: 00:15:57.741 15:58:09 -- scripts/common.sh@336 -- # read -ra ver2 00:15:57.741 15:58:09 -- scripts/common.sh@337 -- # local 'op=<' 00:15:57.741 15:58:09 -- scripts/common.sh@339 -- # ver1_l=2 00:15:57.741 15:58:09 -- scripts/common.sh@340 -- # ver2_l=1 00:15:57.741 15:58:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:57.741 15:58:09 -- scripts/common.sh@343 -- # case "$op" in 00:15:57.741 15:58:09 -- scripts/common.sh@344 -- # : 1 00:15:57.741 15:58:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:57.741 15:58:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:57.741 15:58:09 -- scripts/common.sh@364 -- # decimal 1 00:15:57.741 15:58:09 -- scripts/common.sh@352 -- # local d=1 00:15:57.741 15:58:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:57.741 15:58:09 -- scripts/common.sh@354 -- # echo 1 00:15:57.741 15:58:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:57.741 15:58:09 -- scripts/common.sh@365 -- # decimal 2 00:15:57.741 15:58:09 -- scripts/common.sh@352 -- # local d=2 00:15:57.741 15:58:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:57.741 15:58:09 -- scripts/common.sh@354 -- # echo 2 00:15:57.741 15:58:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:57.741 15:58:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:57.741 15:58:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:57.741 15:58:09 -- scripts/common.sh@367 -- # return 0 00:15:57.741 15:58:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:57.741 15:58:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:57.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.741 --rc genhtml_branch_coverage=1 00:15:57.741 --rc genhtml_function_coverage=1 00:15:57.741 --rc genhtml_legend=1 00:15:57.741 --rc geninfo_all_blocks=1 00:15:57.741 --rc geninfo_unexecuted_blocks=1 00:15:57.741 00:15:57.741 ' 00:15:57.741 15:58:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:57.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.741 --rc genhtml_branch_coverage=1 00:15:57.741 --rc genhtml_function_coverage=1 00:15:57.741 --rc genhtml_legend=1 00:15:57.741 --rc geninfo_all_blocks=1 00:15:57.741 --rc geninfo_unexecuted_blocks=1 00:15:57.741 00:15:57.741 ' 00:15:57.741 15:58:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:57.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.741 --rc genhtml_branch_coverage=1 00:15:57.741 --rc genhtml_function_coverage=1 00:15:57.741 --rc genhtml_legend=1 00:15:57.741 --rc geninfo_all_blocks=1 00:15:57.741 --rc geninfo_unexecuted_blocks=1 00:15:57.741 00:15:57.741 ' 00:15:57.741 15:58:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:57.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.741 --rc genhtml_branch_coverage=1 00:15:57.741 --rc genhtml_function_coverage=1 00:15:57.741 --rc genhtml_legend=1 00:15:57.741 --rc geninfo_all_blocks=1 00:15:57.741 --rc geninfo_unexecuted_blocks=1 00:15:57.741 00:15:57.741 ' 00:15:57.741 15:58:09 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:57.741 15:58:09 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:15:57.741 15:58:09 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:57.742 15:58:09 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:57.742 15:58:09 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:57.742 15:58:09 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:57.742 15:58:09 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:57.742 15:58:09 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:57.742 15:58:09 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:57.742 15:58:09 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:57.742 15:58:09 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:57.742 15:58:09 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:57.742 15:58:09 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:57.742 15:58:09 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:57.742 15:58:09 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:57.742 15:58:09 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:57.742 15:58:09 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:57.742 15:58:09 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:57.742 15:58:09 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:57.742 15:58:09 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:57.742 15:58:09 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:57.742 15:58:09 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:57.742 15:58:09 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:57.742 15:58:09 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:57.742 15:58:09 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:57.742 15:58:09 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:57.742 15:58:09 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:57.742 15:58:09 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:57.742 15:58:09 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:57.742 15:58:09 -- ftl/bdevperf.sh@11 -- # device=0000:00:07.0 00:15:57.742 15:58:09 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:06.0 00:15:57.742 15:58:09 -- ftl/bdevperf.sh@13 -- # use_append= 00:15:57.742 15:58:09 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:57.742 15:58:09 -- ftl/bdevperf.sh@15 -- # timeout=240 00:15:57.742 15:58:09 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:15:57.742 15:58:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:57.742 15:58:09 -- common/autotest_common.sh@10 -- # set +x 00:15:57.742 15:58:09 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=71387 00:15:57.742 15:58:09 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:15:57.742 15:58:09 -- ftl/bdevperf.sh@22 -- # waitforlisten 71387 00:15:57.742 15:58:09 -- common/autotest_common.sh@829 -- # '[' -z 71387 ']' 00:15:57.742 15:58:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.742 15:58:09 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:15:57.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
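The trace above is bdevperf's standard idle-launch handshake: the app is started with -z so it only parks on the RPC socket, and the harness polls until the socket answers before issuing any work. A minimal standalone sketch of the same flow (the polling loop is illustrative; the harness's waitforlisten helper does the equivalent):

  # Start bdevperf idle (-z); it waits on /var/tmp/spdk.sock for RPC-driven tests.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
  bdevperf_pid=$!
  # Same cleanup contract as the trap installed at bdevperf.sh@21.
  trap 'kill $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
  # Block until the app is listening; rpc_get_methods succeeds once RPC is up.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done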
00:15:57.742 15:58:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:57.742 15:58:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.742 15:58:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:57.742 15:58:09 -- common/autotest_common.sh@10 -- # set +x 00:15:57.742 [2024-11-29 15:58:09.119058] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:57.742 [2024-11-29 15:58:09.119403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71387 ] 00:15:58.003 [2024-11-29 15:58:09.273609] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.264 [2024-11-29 15:58:09.469774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.526 15:58:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:58.526 15:58:09 -- common/autotest_common.sh@862 -- # return 0 00:15:58.526 15:58:09 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:15:58.526 15:58:09 -- ftl/common.sh@54 -- # local name=nvme0 00:15:58.526 15:58:09 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:15:58.526 15:58:09 -- ftl/common.sh@56 -- # local size=103424 00:15:58.526 15:58:09 -- ftl/common.sh@59 -- # local base_bdev 00:15:58.526 15:58:09 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:15:59.128 15:58:10 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:59.128 15:58:10 -- ftl/common.sh@62 -- # local base_size 00:15:59.128 15:58:10 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:59.128 15:58:10 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:15:59.128 15:58:10 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:59.128 15:58:10 -- common/autotest_common.sh@1369 -- # local bs 00:15:59.128 15:58:10 -- common/autotest_common.sh@1370 -- # local nb 00:15:59.128 15:58:10 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:59.128 15:58:10 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:59.128 { 00:15:59.128 "name": "nvme0n1", 00:15:59.128 "aliases": [ 00:15:59.128 "b09aa8c8-fadb-44f0-854e-8c1156f6545c" 00:15:59.128 ], 00:15:59.128 "product_name": "NVMe disk", 00:15:59.128 "block_size": 4096, 00:15:59.128 "num_blocks": 1310720, 00:15:59.128 "uuid": "b09aa8c8-fadb-44f0-854e-8c1156f6545c", 00:15:59.128 "assigned_rate_limits": { 00:15:59.128 "rw_ios_per_sec": 0, 00:15:59.128 "rw_mbytes_per_sec": 0, 00:15:59.128 "r_mbytes_per_sec": 0, 00:15:59.128 "w_mbytes_per_sec": 0 00:15:59.128 }, 00:15:59.128 "claimed": true, 00:15:59.128 "claim_type": "read_many_write_one", 00:15:59.128 "zoned": false, 00:15:59.128 "supported_io_types": { 00:15:59.128 "read": true, 00:15:59.128 "write": true, 00:15:59.128 "unmap": true, 00:15:59.128 "write_zeroes": true, 00:15:59.128 "flush": true, 00:15:59.128 "reset": true, 00:15:59.128 "compare": true, 00:15:59.128 "compare_and_write": false, 00:15:59.128 "abort": true, 00:15:59.128 "nvme_admin": true, 00:15:59.128 "nvme_io": true 00:15:59.128 }, 00:15:59.128 "driver_specific": { 00:15:59.128 "nvme": [ 00:15:59.128 { 00:15:59.128 "pci_address": "0000:00:07.0", 00:15:59.128 "trid": { 00:15:59.128 "trtype": "PCIe", 00:15:59.128 "traddr": 
"0000:00:07.0" 00:15:59.128 }, 00:15:59.128 "ctrlr_data": { 00:15:59.128 "cntlid": 0, 00:15:59.128 "vendor_id": "0x1b36", 00:15:59.128 "model_number": "QEMU NVMe Ctrl", 00:15:59.128 "serial_number": "12341", 00:15:59.128 "firmware_revision": "8.0.0", 00:15:59.128 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:59.128 "oacs": { 00:15:59.128 "security": 0, 00:15:59.128 "format": 1, 00:15:59.128 "firmware": 0, 00:15:59.128 "ns_manage": 1 00:15:59.128 }, 00:15:59.128 "multi_ctrlr": false, 00:15:59.128 "ana_reporting": false 00:15:59.128 }, 00:15:59.128 "vs": { 00:15:59.128 "nvme_version": "1.4" 00:15:59.128 }, 00:15:59.128 "ns_data": { 00:15:59.128 "id": 1, 00:15:59.128 "can_share": false 00:15:59.128 } 00:15:59.128 } 00:15:59.128 ], 00:15:59.128 "mp_policy": "active_passive" 00:15:59.128 } 00:15:59.128 } 00:15:59.128 ]' 00:15:59.128 15:58:10 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:59.128 15:58:10 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:59.128 15:58:10 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:59.128 15:58:10 -- common/autotest_common.sh@1373 -- # nb=1310720 00:15:59.128 15:58:10 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:15:59.128 15:58:10 -- common/autotest_common.sh@1377 -- # echo 5120 00:15:59.128 15:58:10 -- ftl/common.sh@63 -- # base_size=5120 00:15:59.128 15:58:10 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:59.128 15:58:10 -- ftl/common.sh@67 -- # clear_lvols 00:15:59.128 15:58:10 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:59.128 15:58:10 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:59.389 15:58:10 -- ftl/common.sh@28 -- # stores=c0492132-f354-4c02-a2ea-2ee54431abfc 00:15:59.389 15:58:10 -- ftl/common.sh@29 -- # for lvs in $stores 00:15:59.389 15:58:10 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c0492132-f354-4c02-a2ea-2ee54431abfc 00:15:59.650 15:58:10 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:59.910 15:58:11 -- ftl/common.sh@68 -- # lvs=fdbb3efc-9fd4-4307-904b-9571ac74fbb3 00:15:59.910 15:58:11 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fdbb3efc-9fd4-4307-904b-9571ac74fbb3 00:16:00.171 15:58:11 -- ftl/bdevperf.sh@23 -- # split_bdev=0fa1fb28-72ce-47a8-96a8-e946d5615842 00:16:00.171 15:58:11 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:06.0 0fa1fb28-72ce-47a8-96a8-e946d5615842 00:16:00.171 15:58:11 -- ftl/common.sh@35 -- # local name=nvc0 00:16:00.171 15:58:11 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:16:00.171 15:58:11 -- ftl/common.sh@37 -- # local base_bdev=0fa1fb28-72ce-47a8-96a8-e946d5615842 00:16:00.171 15:58:11 -- ftl/common.sh@38 -- # local cache_size= 00:16:00.171 15:58:11 -- ftl/common.sh@41 -- # get_bdev_size 0fa1fb28-72ce-47a8-96a8-e946d5615842 00:16:00.171 15:58:11 -- common/autotest_common.sh@1367 -- # local bdev_name=0fa1fb28-72ce-47a8-96a8-e946d5615842 00:16:00.171 15:58:11 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:00.172 15:58:11 -- common/autotest_common.sh@1369 -- # local bs 00:16:00.172 15:58:11 -- common/autotest_common.sh@1370 -- # local nb 00:16:00.172 15:58:11 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0fa1fb28-72ce-47a8-96a8-e946d5615842 00:16:00.172 15:58:11 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:00.172 { 
00:16:00.172 "name": "0fa1fb28-72ce-47a8-96a8-e946d5615842", 00:16:00.172 "aliases": [ 00:16:00.172 "lvs/nvme0n1p0" 00:16:00.172 ], 00:16:00.172 "product_name": "Logical Volume", 00:16:00.172 "block_size": 4096, 00:16:00.172 "num_blocks": 26476544, 00:16:00.172 "uuid": "0fa1fb28-72ce-47a8-96a8-e946d5615842", 00:16:00.172 "assigned_rate_limits": { 00:16:00.172 "rw_ios_per_sec": 0, 00:16:00.172 "rw_mbytes_per_sec": 0, 00:16:00.172 "r_mbytes_per_sec": 0, 00:16:00.172 "w_mbytes_per_sec": 0 00:16:00.172 }, 00:16:00.172 "claimed": false, 00:16:00.172 "zoned": false, 00:16:00.172 "supported_io_types": { 00:16:00.172 "read": true, 00:16:00.172 "write": true, 00:16:00.172 "unmap": true, 00:16:00.172 "write_zeroes": true, 00:16:00.172 "flush": false, 00:16:00.172 "reset": true, 00:16:00.172 "compare": false, 00:16:00.172 "compare_and_write": false, 00:16:00.172 "abort": false, 00:16:00.172 "nvme_admin": false, 00:16:00.172 "nvme_io": false 00:16:00.172 }, 00:16:00.172 "driver_specific": { 00:16:00.172 "lvol": { 00:16:00.172 "lvol_store_uuid": "fdbb3efc-9fd4-4307-904b-9571ac74fbb3", 00:16:00.172 "base_bdev": "nvme0n1", 00:16:00.172 "thin_provision": true, 00:16:00.172 "snapshot": false, 00:16:00.172 "clone": false, 00:16:00.172 "esnap_clone": false 00:16:00.172 } 00:16:00.172 } 00:16:00.172 } 00:16:00.172 ]' 00:16:00.172 15:58:11 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:00.172 15:58:11 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:00.172 15:58:11 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:00.433 15:58:11 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:00.433 15:58:11 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:00.433 15:58:11 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:00.433 15:58:11 -- ftl/common.sh@41 -- # local base_size=5171 00:16:00.433 15:58:11 -- ftl/common.sh@44 -- # local nvc_bdev 00:16:00.433 15:58:11 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:16:00.696 15:58:11 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:00.696 15:58:11 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:00.696 15:58:11 -- ftl/common.sh@48 -- # get_bdev_size 0fa1fb28-72ce-47a8-96a8-e946d5615842 00:16:00.696 15:58:11 -- common/autotest_common.sh@1367 -- # local bdev_name=0fa1fb28-72ce-47a8-96a8-e946d5615842 00:16:00.696 15:58:11 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:00.696 15:58:11 -- common/autotest_common.sh@1369 -- # local bs 00:16:00.696 15:58:11 -- common/autotest_common.sh@1370 -- # local nb 00:16:00.696 15:58:11 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0fa1fb28-72ce-47a8-96a8-e946d5615842 00:16:00.696 15:58:12 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:00.696 { 00:16:00.696 "name": "0fa1fb28-72ce-47a8-96a8-e946d5615842", 00:16:00.696 "aliases": [ 00:16:00.696 "lvs/nvme0n1p0" 00:16:00.696 ], 00:16:00.696 "product_name": "Logical Volume", 00:16:00.696 "block_size": 4096, 00:16:00.696 "num_blocks": 26476544, 00:16:00.696 "uuid": "0fa1fb28-72ce-47a8-96a8-e946d5615842", 00:16:00.696 "assigned_rate_limits": { 00:16:00.696 "rw_ios_per_sec": 0, 00:16:00.696 "rw_mbytes_per_sec": 0, 00:16:00.696 "r_mbytes_per_sec": 0, 00:16:00.696 "w_mbytes_per_sec": 0 00:16:00.696 }, 00:16:00.696 "claimed": false, 00:16:00.696 "zoned": false, 00:16:00.696 "supported_io_types": { 00:16:00.696 "read": true, 00:16:00.696 "write": true, 00:16:00.696 "unmap": true, 
00:16:00.696 "write_zeroes": true, 00:16:00.696 "flush": false, 00:16:00.696 "reset": true, 00:16:00.696 "compare": false, 00:16:00.696 "compare_and_write": false, 00:16:00.696 "abort": false, 00:16:00.696 "nvme_admin": false, 00:16:00.696 "nvme_io": false 00:16:00.696 }, 00:16:00.696 "driver_specific": { 00:16:00.696 "lvol": { 00:16:00.696 "lvol_store_uuid": "fdbb3efc-9fd4-4307-904b-9571ac74fbb3", 00:16:00.696 "base_bdev": "nvme0n1", 00:16:00.696 "thin_provision": true, 00:16:00.696 "snapshot": false, 00:16:00.696 "clone": false, 00:16:00.696 "esnap_clone": false 00:16:00.696 } 00:16:00.696 } 00:16:00.696 } 00:16:00.696 ]' 00:16:00.696 15:58:12 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:00.696 15:58:12 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:00.696 15:58:12 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:00.958 15:58:12 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:00.958 15:58:12 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:00.958 15:58:12 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:00.958 15:58:12 -- ftl/common.sh@48 -- # cache_size=5171 00:16:00.958 15:58:12 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:00.958 15:58:12 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:16:00.958 15:58:12 -- ftl/bdevperf.sh@26 -- # get_bdev_size 0fa1fb28-72ce-47a8-96a8-e946d5615842 00:16:00.958 15:58:12 -- common/autotest_common.sh@1367 -- # local bdev_name=0fa1fb28-72ce-47a8-96a8-e946d5615842 00:16:00.958 15:58:12 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:00.958 15:58:12 -- common/autotest_common.sh@1369 -- # local bs 00:16:00.958 15:58:12 -- common/autotest_common.sh@1370 -- # local nb 00:16:00.958 15:58:12 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0fa1fb28-72ce-47a8-96a8-e946d5615842 00:16:01.220 15:58:12 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:01.220 { 00:16:01.220 "name": "0fa1fb28-72ce-47a8-96a8-e946d5615842", 00:16:01.220 "aliases": [ 00:16:01.220 "lvs/nvme0n1p0" 00:16:01.220 ], 00:16:01.220 "product_name": "Logical Volume", 00:16:01.220 "block_size": 4096, 00:16:01.220 "num_blocks": 26476544, 00:16:01.220 "uuid": "0fa1fb28-72ce-47a8-96a8-e946d5615842", 00:16:01.220 "assigned_rate_limits": { 00:16:01.220 "rw_ios_per_sec": 0, 00:16:01.220 "rw_mbytes_per_sec": 0, 00:16:01.220 "r_mbytes_per_sec": 0, 00:16:01.220 "w_mbytes_per_sec": 0 00:16:01.220 }, 00:16:01.220 "claimed": false, 00:16:01.220 "zoned": false, 00:16:01.220 "supported_io_types": { 00:16:01.220 "read": true, 00:16:01.220 "write": true, 00:16:01.220 "unmap": true, 00:16:01.220 "write_zeroes": true, 00:16:01.220 "flush": false, 00:16:01.220 "reset": true, 00:16:01.220 "compare": false, 00:16:01.220 "compare_and_write": false, 00:16:01.220 "abort": false, 00:16:01.220 "nvme_admin": false, 00:16:01.220 "nvme_io": false 00:16:01.220 }, 00:16:01.220 "driver_specific": { 00:16:01.220 "lvol": { 00:16:01.220 "lvol_store_uuid": "fdbb3efc-9fd4-4307-904b-9571ac74fbb3", 00:16:01.220 "base_bdev": "nvme0n1", 00:16:01.220 "thin_provision": true, 00:16:01.220 "snapshot": false, 00:16:01.220 "clone": false, 00:16:01.220 "esnap_clone": false 00:16:01.220 } 00:16:01.220 } 00:16:01.220 } 00:16:01.220 ]' 00:16:01.220 15:58:12 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:01.220 15:58:12 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:01.220 15:58:12 -- common/autotest_common.sh@1373 -- 
# jq '.[] .num_blocks' 00:16:01.220 15:58:12 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:01.220 15:58:12 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:01.220 15:58:12 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:01.220 15:58:12 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:16:01.220 15:58:12 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0fa1fb28-72ce-47a8-96a8-e946d5615842 -c nvc0n1p0 --l2p_dram_limit 20 00:16:01.484 [2024-11-29 15:58:12.784959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.484 [2024-11-29 15:58:12.785038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:01.484 [2024-11-29 15:58:12.785057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:01.484 [2024-11-29 15:58:12.785066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.484 [2024-11-29 15:58:12.785135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.484 [2024-11-29 15:58:12.785146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:01.484 [2024-11-29 15:58:12.785157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:01.484 [2024-11-29 15:58:12.785166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.484 [2024-11-29 15:58:12.785187] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:01.484 [2024-11-29 15:58:12.786093] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:01.484 [2024-11-29 15:58:12.786123] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.484 [2024-11-29 15:58:12.786149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:01.484 [2024-11-29 15:58:12.786162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:16:01.484 [2024-11-29 15:58:12.786170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.484 [2024-11-29 15:58:12.786204] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 29f47cb6-a4c5-45ed-a8ef-f7ec0df33b9b 00:16:01.485 [2024-11-29 15:58:12.788054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.485 [2024-11-29 15:58:12.788106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:01.485 [2024-11-29 15:58:12.788118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:01.485 [2024-11-29 15:58:12.788128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.485 [2024-11-29 15:58:12.797615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.485 [2024-11-29 15:58:12.797845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:01.485 [2024-11-29 15:58:12.797867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.431 ms 00:16:01.485 [2024-11-29 15:58:12.797877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.485 [2024-11-29 15:58:12.798152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.485 [2024-11-29 15:58:12.798182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:01.485 [2024-11-29 15:58:12.798194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.086 ms 00:16:01.485 [2024-11-29 15:58:12.798210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.485 [2024-11-29 15:58:12.798274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.485 [2024-11-29 15:58:12.798287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:01.485 [2024-11-29 15:58:12.798300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:16:01.485 [2024-11-29 15:58:12.798309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.485 [2024-11-29 15:58:12.798334] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:01.485 [2024-11-29 15:58:12.802943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.485 [2024-11-29 15:58:12.803003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:01.485 [2024-11-29 15:58:12.803017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.615 ms 00:16:01.485 [2024-11-29 15:58:12.803025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.485 [2024-11-29 15:58:12.803066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.485 [2024-11-29 15:58:12.803074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:01.485 [2024-11-29 15:58:12.803085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:01.485 [2024-11-29 15:58:12.803093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.485 [2024-11-29 15:58:12.803127] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:01.485 [2024-11-29 15:58:12.803260] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:01.485 [2024-11-29 15:58:12.803278] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:01.485 [2024-11-29 15:58:12.803290] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:01.485 [2024-11-29 15:58:12.803303] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:01.485 [2024-11-29 15:58:12.803312] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:01.485 [2024-11-29 15:58:12.803323] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:01.485 [2024-11-29 15:58:12.803331] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:01.485 [2024-11-29 15:58:12.803345] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:01.485 [2024-11-29 15:58:12.803352] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:01.485 [2024-11-29 15:58:12.803361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.485 [2024-11-29 15:58:12.803369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:01.485 [2024-11-29 15:58:12.803380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:16:01.485 [2024-11-29 15:58:12.803388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.485 [2024-11-29 15:58:12.803451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:01.485 [2024-11-29 15:58:12.803459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:01.485 [2024-11-29 15:58:12.803469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:01.485 [2024-11-29 15:58:12.803475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.485 [2024-11-29 15:58:12.803549] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:01.485 [2024-11-29 15:58:12.803566] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:01.485 [2024-11-29 15:58:12.803576] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:01.485 [2024-11-29 15:58:12.803591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.485 [2024-11-29 15:58:12.803601] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:01.485 [2024-11-29 15:58:12.803607] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:01.485 [2024-11-29 15:58:12.803616] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:01.485 [2024-11-29 15:58:12.803622] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:01.485 [2024-11-29 15:58:12.803631] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:01.485 [2024-11-29 15:58:12.803637] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:01.485 [2024-11-29 15:58:12.803647] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:01.485 [2024-11-29 15:58:12.803656] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:01.485 [2024-11-29 15:58:12.803665] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:01.485 [2024-11-29 15:58:12.803672] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:01.485 [2024-11-29 15:58:12.803680] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:16:01.485 [2024-11-29 15:58:12.803687] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.485 [2024-11-29 15:58:12.803698] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:01.485 [2024-11-29 15:58:12.803705] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:16:01.485 [2024-11-29 15:58:12.803713] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.485 [2024-11-29 15:58:12.803721] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:01.485 [2024-11-29 15:58:12.803730] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:16:01.485 [2024-11-29 15:58:12.803736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:01.485 [2024-11-29 15:58:12.803745] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:01.485 [2024-11-29 15:58:12.803752] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:01.485 [2024-11-29 15:58:12.803760] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:01.485 [2024-11-29 15:58:12.803767] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:01.485 [2024-11-29 15:58:12.803775] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:16:01.485 [2024-11-29 15:58:12.803781] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:01.485 [2024-11-29 15:58:12.803789] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l2 00:16:01.485 [2024-11-29 15:58:12.803796] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:01.485 [2024-11-29 15:58:12.803804] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:01.485 [2024-11-29 15:58:12.803810] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:01.485 [2024-11-29 15:58:12.803821] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:16:01.485 [2024-11-29 15:58:12.803828] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:01.485 [2024-11-29 15:58:12.803836] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:01.485 [2024-11-29 15:58:12.803843] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:01.485 [2024-11-29 15:58:12.803852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:01.485 [2024-11-29 15:58:12.803859] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:01.486 [2024-11-29 15:58:12.803868] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:16:01.486 [2024-11-29 15:58:12.803874] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:01.486 [2024-11-29 15:58:12.803883] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:01.486 [2024-11-29 15:58:12.803891] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:01.486 [2024-11-29 15:58:12.803900] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:01.486 [2024-11-29 15:58:12.803913] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.486 [2024-11-29 15:58:12.803923] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:01.486 [2024-11-29 15:58:12.803930] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:01.486 [2024-11-29 15:58:12.803938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:01.486 [2024-11-29 15:58:12.803945] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:01.486 [2024-11-29 15:58:12.803956] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:01.486 [2024-11-29 15:58:12.803962] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:01.486 [2024-11-29 15:58:12.803986] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:01.486 [2024-11-29 15:58:12.803997] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:01.486 [2024-11-29 15:58:12.804010] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:01.486 [2024-11-29 15:58:12.804017] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:16:01.486 [2024-11-29 15:58:12.804027] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:16:01.486 [2024-11-29 15:58:12.804034] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:16:01.486 [2024-11-29 15:58:12.804043] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 
blk_offs:0x5520 blk_sz:0x400 00:16:01.486 [2024-11-29 15:58:12.804049] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:16:01.486 [2024-11-29 15:58:12.804058] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:16:01.486 [2024-11-29 15:58:12.804065] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:16:01.486 [2024-11-29 15:58:12.804074] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:16:01.486 [2024-11-29 15:58:12.804081] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:16:01.486 [2024-11-29 15:58:12.804092] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:16:01.486 [2024-11-29 15:58:12.804099] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:16:01.486 [2024-11-29 15:58:12.804110] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:16:01.486 [2024-11-29 15:58:12.804117] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:01.486 [2024-11-29 15:58:12.804128] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:01.486 [2024-11-29 15:58:12.804136] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:01.486 [2024-11-29 15:58:12.804146] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:01.486 [2024-11-29 15:58:12.804152] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:01.486 [2024-11-29 15:58:12.804162] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:01.486 [2024-11-29 15:58:12.804169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.486 [2024-11-29 15:58:12.804179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:01.486 [2024-11-29 15:58:12.804187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.668 ms 00:16:01.486 [2024-11-29 15:58:12.804197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.486 [2024-11-29 15:58:12.823238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.486 [2024-11-29 15:58:12.823421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:01.486 [2024-11-29 15:58:12.823488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.001 ms 00:16:01.486 [2024-11-29 15:58:12.823515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.486 [2024-11-29 15:58:12.823625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.486 [2024-11-29 15:58:12.823652] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:01.486 [2024-11-29 15:58:12.823673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:16:01.486 [2024-11-29 15:58:12.823694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.486 [2024-11-29 15:58:12.867259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.486 [2024-11-29 15:58:12.867444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:01.486 [2024-11-29 15:58:12.867511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.499 ms 00:16:01.486 [2024-11-29 15:58:12.867539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.486 [2024-11-29 15:58:12.867594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.486 [2024-11-29 15:58:12.867622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:01.486 [2024-11-29 15:58:12.867642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:01.486 [2024-11-29 15:58:12.867664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.486 [2024-11-29 15:58:12.868253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.486 [2024-11-29 15:58:12.868513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:01.486 [2024-11-29 15:58:12.868598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:16:01.486 [2024-11-29 15:58:12.868624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.486 [2024-11-29 15:58:12.868893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.486 [2024-11-29 15:58:12.868946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:01.486 [2024-11-29 15:58:12.868986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:16:01.486 [2024-11-29 15:58:12.869011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.486 [2024-11-29 15:58:12.886028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.486 [2024-11-29 15:58:12.886185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:01.486 [2024-11-29 15:58:12.886248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.988 ms 00:16:01.486 [2024-11-29 15:58:12.886275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.486 [2024-11-29 15:58:12.899577] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:01.486 [2024-11-29 15:58:12.907182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.486 [2024-11-29 15:58:12.907326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:01.486 [2024-11-29 15:58:12.907349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.797 ms 00:16:01.486 [2024-11-29 15:58:12.907358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.749 [2024-11-29 15:58:13.010800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.749 [2024-11-29 15:58:13.010875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:01.749 [2024-11-29 15:58:13.010893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.404 ms 00:16:01.749 [2024-11-29 15:58:13.010901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:01.749 [2024-11-29 15:58:13.010960] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:16:01.749 [2024-11-29 15:58:13.010989] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:16:05.960 [2024-11-29 15:58:16.982878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.960 [2024-11-29 15:58:16.983238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:05.960 [2024-11-29 15:58:16.983275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3971.892 ms 00:16:05.960 [2024-11-29 15:58:16.983285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.960 [2024-11-29 15:58:16.983513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.960 [2024-11-29 15:58:16.983524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:05.960 [2024-11-29 15:58:16.983537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:16:05.960 [2024-11-29 15:58:16.983545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.960 [2024-11-29 15:58:17.010341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.960 [2024-11-29 15:58:17.010527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:05.960 [2024-11-29 15:58:17.010555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.731 ms 00:16:05.960 [2024-11-29 15:58:17.010567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.960 [2024-11-29 15:58:17.035825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.960 [2024-11-29 15:58:17.035874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:05.960 [2024-11-29 15:58:17.035893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.179 ms 00:16:05.960 [2024-11-29 15:58:17.035901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.960 [2024-11-29 15:58:17.036274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.960 [2024-11-29 15:58:17.036289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:05.960 [2024-11-29 15:58:17.036301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:16:05.960 [2024-11-29 15:58:17.036309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.960 [2024-11-29 15:58:17.108183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.960 [2024-11-29 15:58:17.108234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:05.960 [2024-11-29 15:58:17.108249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.830 ms 00:16:05.960 [2024-11-29 15:58:17.108256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.960 [2024-11-29 15:58:17.135965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.960 [2024-11-29 15:58:17.136024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:05.960 [2024-11-29 15:58:17.136038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.649 ms 00:16:05.960 [2024-11-29 15:58:17.136046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.960 [2024-11-29 
15:58:17.137529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.960 [2024-11-29 15:58:17.137568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:05.960 [2024-11-29 15:58:17.137584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.432 ms 00:16:05.960 [2024-11-29 15:58:17.137595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.960 [2024-11-29 15:58:17.164334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.960 [2024-11-29 15:58:17.164388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:05.960 [2024-11-29 15:58:17.164404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.690 ms 00:16:05.960 [2024-11-29 15:58:17.164411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.960 [2024-11-29 15:58:17.164466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.960 [2024-11-29 15:58:17.164476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:05.960 [2024-11-29 15:58:17.164491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:05.960 [2024-11-29 15:58:17.164498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.960 [2024-11-29 15:58:17.164605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.960 [2024-11-29 15:58:17.164617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:05.960 [2024-11-29 15:58:17.164634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:05.960 [2024-11-29 15:58:17.164642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.960 [2024-11-29 15:58:17.165831] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4380.362 ms, result 0 00:16:05.960 { 00:16:05.960 "name": "ftl0", 00:16:05.960 "uuid": "29f47cb6-a4c5-45ed-a8ef-f7ec0df33b9b" 00:16:05.960 } 00:16:05.960 15:58:17 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:05.960 15:58:17 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:16:05.960 15:58:17 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:16:06.222 15:58:17 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:06.222 [2024-11-29 15:58:17.505945] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:06.222 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:06.222 Zero copy mechanism will not be used. 00:16:06.222 Running I/O for 4 seconds... 
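Everything from "Check configuration" down to "Management process finished, name 'FTL startup'" above is the bring-up of a single FTL instance, all driven by the one bdev_ftl_create RPC visible in the trace. Stripped of the harness wrappers (device names and the lvol UUID taken verbatim from the log), the create-and-verify pair reduces to:

  # Create FTL bdev ftl0: base device = the thin-provisioned lvol, write-buffer
  # cache = split partition nvc0n1p0, L2P table capped at 20 MiB of DRAM
  # (hence "l2p maximum resident size is: 19 (of 20) MiB" above).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
    -d 0fa1fb28-72ce-47a8-96a8-e946d5615842 -c nvc0n1p0 --l2p_dram_limit 20
  # Sanity check: the stats RPC should report the device under its new name.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0

The generous -t 240 timeout is deliberate: on first startup FTL scrubs the NV cache data region ("Scrubbing 4GiB"), which alone accounted for roughly four of the 4.4 seconds of startup here.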
00:16:10.428 00:16:10.428 Latency(us) 00:16:10.428 [2024-11-29T15:58:21.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.428 [2024-11-29T15:58:21.859Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:10.428 ftl0 : 4.00 1459.91 96.95 0.00 0.00 722.39 143.36 2608.84 00:16:10.428 [2024-11-29T15:58:21.859Z] =================================================================================================================== 00:16:10.428 [2024-11-29T15:58:21.859Z] Total : 1459.91 96.95 0.00 0.00 722.39 143.36 2608.84 00:16:10.428 [2024-11-29 15:58:21.514757] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:10.428 0 00:16:10.428 15:58:21 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:10.428 [2024-11-29 15:58:21.615604] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:10.428 Running I/O for 4 seconds... 00:16:14.637 00:16:14.637 Latency(us) 00:16:14.637 [2024-11-29T15:58:26.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.637 [2024-11-29T15:58:26.068Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:14.637 ftl0 : 4.03 6042.52 23.60 0.00 0.00 21116.80 266.24 189550.28 00:16:14.637 [2024-11-29T15:58:26.068Z] =================================================================================================================== 00:16:14.637 [2024-11-29T15:58:26.068Z] Total : 6042.52 23.60 0.00 0.00 21116.80 0.00 189550.28 00:16:14.637 [2024-11-29 15:58:25.649164] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:14.637 0 00:16:14.638 15:58:25 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:14.638 [2024-11-29 15:58:25.767015] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:14.638 Running I/O for 4 seconds... 
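As a quick cross-check on the two tables above, the MiB/s column is just IOPS × IO size ÷ 2^20: 1459.91 × 69632 B ≈ 96.95 MiB/s for the QD-1 run (69632 B = 68 KiB, which is why bdevperf warned that it exceeds the 65536-byte zero-copy threshold), and 6042.52 × 4096 B ≈ 23.60 MiB/s for the QD-128 run. The verify table that follows obeys the same identity.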
00:16:18.843 00:16:18.843 Latency(us) 00:16:18.843 [2024-11-29T15:58:30.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.843 [2024-11-29T15:58:30.274Z] Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:18.843 Verification LBA range: start 0x0 length 0x1400000 00:16:18.843 ftl0 : 4.01 10406.28 40.65 0.00 0.00 12265.96 170.14 25306.98 00:16:18.843 [2024-11-29T15:58:30.274Z] =================================================================================================================== 00:16:18.843 [2024-11-29T15:58:30.274Z] Total : 10406.28 40.65 0.00 0.00 12265.96 0.00 25306.98 00:16:18.843 [2024-11-29 15:58:29.793627] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:18.843 0 00:16:18.843 15:58:29 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 [2024-11-29 15:58:29.992790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.843 [2024-11-29 15:58:29.993063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:18.843 [2024-11-29 15:58:29.993095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:18.843 [2024-11-29 15:58:29.993104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.843 [2024-11-29 15:58:29.993142] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:18.843 [2024-11-29 15:58:29.996069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.843 [2024-11-29 15:58:29.996241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:18.843 [2024-11-29 15:58:29.996263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.911 ms 00:16:18.843 [2024-11-29 15:58:29.996277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.843 [2024-11-29 15:58:29.999577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.843 [2024-11-29 15:58:29.999751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:18.843 [2024-11-29 15:58:29.999772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.268 ms 00:16:18.843 [2024-11-29 15:58:29.999784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.105 [2024-11-29 15:58:30.330022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.105 [2024-11-29 15:58:30.330256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:19.105 [2024-11-29 15:58:30.330281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 330.212 ms 00:16:19.105 [2024-11-29 15:58:30.330294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.106 [2024-11-29 15:58:30.336423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.106 [2024-11-29 15:58:30.336476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:19.106 [2024-11-29 15:58:30.336490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.092 ms 00:16:19.106 [2024-11-29 15:58:30.336500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.106 [2024-11-29 15:58:30.363743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.106 [2024-11-29 15:58:30.363802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:16:19.106 [2024-11-29 15:58:30.363815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.164 ms 00:16:19.106 [2024-11-29 15:58:30.363829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.106 [2024-11-29 15:58:30.382266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.106 [2024-11-29 15:58:30.382325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:19.106 [2024-11-29 15:58:30.382339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.388 ms 00:16:19.106 [2024-11-29 15:58:30.382352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.106 [2024-11-29 15:58:30.382517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.106 [2024-11-29 15:58:30.382533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:19.106 [2024-11-29 15:58:30.382543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:16:19.106 [2024-11-29 15:58:30.382553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.106 [2024-11-29 15:58:30.409106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.106 [2024-11-29 15:58:30.409159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:19.106 [2024-11-29 15:58:30.409171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.537 ms 00:16:19.106 [2024-11-29 15:58:30.409180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.106 [2024-11-29 15:58:30.435068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.106 [2024-11-29 15:58:30.435120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:19.106 [2024-11-29 15:58:30.435131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.838 ms 00:16:19.106 [2024-11-29 15:58:30.435143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.106 [2024-11-29 15:58:30.460388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.106 [2024-11-29 15:58:30.460438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:19.106 [2024-11-29 15:58:30.460449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.197 ms 00:16:19.106 [2024-11-29 15:58:30.460459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.106 [2024-11-29 15:58:30.485887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.106 [2024-11-29 15:58:30.485941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:19.106 [2024-11-29 15:58:30.485953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.213 ms 00:16:19.106 [2024-11-29 15:58:30.485963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.106 [2024-11-29 15:58:30.486047] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:19.106 [2024-11-29 15:58:30.486071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486101] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 
15:58:30.486327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:16:19.106 [2024-11-29 15:58:30.486559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:19.106 [2024-11-29 15:58:30.486643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.486964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.487375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:19.107 [2024-11-29 15:58:30.487444] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:19.107 [2024-11-29 15:58:30.487466] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 29f47cb6-a4c5-45ed-a8ef-f7ec0df33b9b 00:16:19.107 [2024-11-29 15:58:30.487501] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:19.107 
[2024-11-29 15:58:30.487669] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:19.107 [2024-11-29 15:58:30.487766] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:19.107 [2024-11-29 15:58:30.487791] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:19.107 [2024-11-29 15:58:30.487816] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:19.107 [2024-11-29 15:58:30.487835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:19.107 [2024-11-29 15:58:30.487856] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:19.107 [2024-11-29 15:58:30.487873] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:19.107 [2024-11-29 15:58:30.487894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:19.107 [2024-11-29 15:58:30.487913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.107 [2024-11-29 15:58:30.487934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:19.107 [2024-11-29 15:58:30.488012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.867 ms 00:16:19.107 [2024-11-29 15:58:30.488039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.107 [2024-11-29 15:58:30.502048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.107 [2024-11-29 15:58:30.502211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:19.107 [2024-11-29 15:58:30.502279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.949 ms 00:16:19.107 [2024-11-29 15:58:30.502307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.107 [2024-11-29 15:58:30.502573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.107 [2024-11-29 15:58:30.502663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:19.107 [2024-11-29 15:58:30.502677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:16:19.107 [2024-11-29 15:58:30.502687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.369 [2024-11-29 15:58:30.544957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.369 [2024-11-29 15:58:30.545022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:19.369 [2024-11-29 15:58:30.545034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.369 [2024-11-29 15:58:30.545045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.369 [2024-11-29 15:58:30.545124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.369 [2024-11-29 15:58:30.545135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:19.369 [2024-11-29 15:58:30.545144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.369 [2024-11-29 15:58:30.545154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.369 [2024-11-29 15:58:30.545234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.369 [2024-11-29 15:58:30.545247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:19.369 [2024-11-29 15:58:30.545259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.369 [2024-11-29 15:58:30.545271] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.369 [2024-11-29 15:58:30.545288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.369 [2024-11-29 15:58:30.545298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:19.369 [2024-11-29 15:58:30.545305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.369 [2024-11-29 15:58:30.545314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.369 [2024-11-29 15:58:30.626875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.369 [2024-11-29 15:58:30.626935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:19.369 [2024-11-29 15:58:30.626951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.369 [2024-11-29 15:58:30.626962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.369 [2024-11-29 15:58:30.658649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.369 [2024-11-29 15:58:30.658705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:19.369 [2024-11-29 15:58:30.658717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.369 [2024-11-29 15:58:30.658727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.369 [2024-11-29 15:58:30.658798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.369 [2024-11-29 15:58:30.658810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:19.369 [2024-11-29 15:58:30.658818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.369 [2024-11-29 15:58:30.658835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.369 [2024-11-29 15:58:30.658877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.369 [2024-11-29 15:58:30.658889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:19.370 [2024-11-29 15:58:30.658898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.370 [2024-11-29 15:58:30.658908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.370 [2024-11-29 15:58:30.659039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.370 [2024-11-29 15:58:30.659052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:19.370 [2024-11-29 15:58:30.659061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.370 [2024-11-29 15:58:30.659071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.370 [2024-11-29 15:58:30.659106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.370 [2024-11-29 15:58:30.659118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:19.370 [2024-11-29 15:58:30.659126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.370 [2024-11-29 15:58:30.659136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.370 [2024-11-29 15:58:30.659179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.370 [2024-11-29 15:58:30.659190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:19.370 [2024-11-29 15:58:30.659198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:16:19.370 [2024-11-29 15:58:30.659210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.370 [2024-11-29 15:58:30.659261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.370 [2024-11-29 15:58:30.659273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:19.370 [2024-11-29 15:58:30.659281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.370 [2024-11-29 15:58:30.659291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.370 [2024-11-29 15:58:30.659441] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 666.601 ms, result 0 00:16:19.370 true 00:16:19.370 15:58:30 -- ftl/bdevperf.sh@37 -- # killprocess 71387 00:16:19.370 15:58:30 -- common/autotest_common.sh@936 -- # '[' -z 71387 ']' 00:16:19.370 15:58:30 -- common/autotest_common.sh@940 -- # kill -0 71387 00:16:19.370 15:58:30 -- common/autotest_common.sh@941 -- # uname 00:16:19.370 15:58:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:19.370 15:58:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71387 00:16:19.370 killing process with pid 71387 00:16:19.370 Received shutdown signal, test time was about 4.000000 seconds 00:16:19.370 00:16:19.370 Latency(us) 00:16:19.370 [2024-11-29T15:58:30.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.370 [2024-11-29T15:58:30.801Z] =================================================================================================================== 00:16:19.370 [2024-11-29T15:58:30.801Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:19.370 15:58:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:19.370 15:58:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:19.370 15:58:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71387' 00:16:19.370 15:58:30 -- common/autotest_common.sh@955 -- # kill 71387 00:16:19.370 15:58:30 -- common/autotest_common.sh@960 -- # wait 71387 00:16:20.314 15:58:31 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:16:20.314 15:58:31 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:16:20.314 15:58:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:20.314 15:58:31 -- common/autotest_common.sh@10 -- # set +x 00:16:20.314 Remove shared memory files 00:16:20.314 15:58:31 -- ftl/bdevperf.sh@41 -- # remove_shm 00:16:20.314 15:58:31 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:20.314 15:58:31 -- ftl/common.sh@205 -- # rm -f rm -f 00:16:20.314 15:58:31 -- ftl/common.sh@206 -- # rm -f rm -f 00:16:20.314 15:58:31 -- ftl/common.sh@207 -- # rm -f rm -f 00:16:20.314 15:58:31 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:20.314 15:58:31 -- ftl/common.sh@209 -- # rm -f rm -f 00:16:20.314 ************************************ 00:16:20.314 END TEST ftl_bdevperf 00:16:20.314 ************************************ 00:16:20.314 00:16:20.314 real 0m22.776s 00:16:20.314 user 0m25.231s 00:16:20.314 sys 0m0.998s 00:16:20.314 15:58:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:20.314 15:58:31 -- common/autotest_common.sh@10 -- # set +x 00:16:20.314 15:58:31 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:16:20.314 15:58:31 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 
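
One detail in the ftl_debug.c statistics dumped during the shutdown above is worth spelling out. Write amplification (WAF) is media writes divided by host writes, so with "total writes: 960" and "user writes: 0" the quotient has no finite value:

    WAF = total writes / user writes = 960 / 0 -> inf

In other words, every one of the 960 writes counted in this shutdown came from the metadata persistence steps traced above (superblock, band info, trim and L2P persistence), not from user data, and the FTL reports the infinite ratio as "WAF: inf".
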
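The trace_step notices that make up the shutdown sequence above all come from FTL's management machinery in mngt/ftl_mngt.c: a process such as 'FTL shutdown' is an ordered list of named steps, each one timed and status-checked, and the "Rollback" notices name the undo side of the matching startup steps (which is why they report a duration of 0.000 ms here). Below is a minimal synchronous sketch of that action/rollback pattern; the names (mngt_step, mngt_process_run) are illustrative, not SPDK's actual API, and the real implementation runs steps asynchronously on FTL's threads.

    /*
     * Illustrative sketch of the action/rollback step pattern behind the
     * trace_step notices above; not SPDK's real mngt/ftl_mngt.c API.
     */
    #include <stdio.h>

    struct mngt_step {
        const char *name;
        int  (*action)(void);   /* forward step, returns 0 on success */
        void (*rollback)(void); /* undo, replayed if a later step fails */
    };

    static int do_nothing(void)    { return 0; }
    static void undo_nothing(void) { /* release what the action set up */ }

    static int mngt_process_run(const char *process,
                                const struct mngt_step *steps, int n)
    {
        int i, rc = 0;

        for (i = 0; i < n; i++) {
            printf("Action name: %s\n", steps[i].name);
            rc = steps[i].action();
            printf("status: %d\n", rc);
            if (rc != 0) {
                /* unwind the already-completed steps in reverse order */
                while (--i >= 0) {
                    if (steps[i].rollback != NULL) {
                        printf("Rollback name: %s\n", steps[i].name);
                        steps[i].rollback();
                    }
                }
                break;
            }
        }
        printf("Management process finished, name '%s', result %d\n",
               process, rc);
        return rc;
    }

    int main(void)
    {
        const struct mngt_step shutdown[] = {
            { "Deinit core IO channel", do_nothing, undo_nothing },
            { "Persist L2P",            do_nothing, undo_nothing },
            { "Persist superblock",     do_nothing, undo_nothing },
            { "Set FTL clean state",    do_nothing, undo_nothing },
        };

        return mngt_process_run("FTL shutdown", shutdown,
                                (int)(sizeof(shutdown) / sizeof(shutdown[0])));
    }

A clean run, like the "result 0" shutdown logged above, walks every action in order and never touches the rollbacks; the rollback callbacks exist so that a failure partway through leaves no half-initialized state behind.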
00:16:20.314 15:58:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:20.314 15:58:31 -- common/autotest_common.sh@10 -- # set +x 00:16:20.314 ************************************ 00:16:20.314 START TEST ftl_trim 00:16:20.314 ************************************ 00:16:20.314 15:58:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:16:20.576 * Looking for test storage... 00:16:20.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:20.576 15:58:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:20.576 15:58:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:20.576 15:58:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:20.576 15:58:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:20.576 15:58:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:20.576 15:58:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:20.576 15:58:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:20.576 15:58:31 -- scripts/common.sh@335 -- # IFS=.-: 00:16:20.576 15:58:31 -- scripts/common.sh@335 -- # read -ra ver1 00:16:20.576 15:58:31 -- scripts/common.sh@336 -- # IFS=.-: 00:16:20.576 15:58:31 -- scripts/common.sh@336 -- # read -ra ver2 00:16:20.576 15:58:31 -- scripts/common.sh@337 -- # local 'op=<' 00:16:20.576 15:58:31 -- scripts/common.sh@339 -- # ver1_l=2 00:16:20.576 15:58:31 -- scripts/common.sh@340 -- # ver2_l=1 00:16:20.576 15:58:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:20.576 15:58:31 -- scripts/common.sh@343 -- # case "$op" in 00:16:20.576 15:58:31 -- scripts/common.sh@344 -- # : 1 00:16:20.576 15:58:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:20.576 15:58:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:20.576 15:58:31 -- scripts/common.sh@364 -- # decimal 1 00:16:20.576 15:58:31 -- scripts/common.sh@352 -- # local d=1 00:16:20.576 15:58:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:20.576 15:58:31 -- scripts/common.sh@354 -- # echo 1 00:16:20.576 15:58:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:20.576 15:58:31 -- scripts/common.sh@365 -- # decimal 2 00:16:20.576 15:58:31 -- scripts/common.sh@352 -- # local d=2 00:16:20.576 15:58:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:20.576 15:58:31 -- scripts/common.sh@354 -- # echo 2 00:16:20.576 15:58:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:20.576 15:58:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:20.576 15:58:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:20.576 15:58:31 -- scripts/common.sh@367 -- # return 0 00:16:20.576 15:58:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:20.576 15:58:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:20.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.576 --rc genhtml_branch_coverage=1 00:16:20.576 --rc genhtml_function_coverage=1 00:16:20.576 --rc genhtml_legend=1 00:16:20.576 --rc geninfo_all_blocks=1 00:16:20.576 --rc geninfo_unexecuted_blocks=1 00:16:20.576 00:16:20.576 ' 00:16:20.576 15:58:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:20.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.576 --rc genhtml_branch_coverage=1 00:16:20.576 --rc genhtml_function_coverage=1 00:16:20.576 --rc genhtml_legend=1 00:16:20.576 --rc geninfo_all_blocks=1 00:16:20.576 --rc geninfo_unexecuted_blocks=1 00:16:20.576 00:16:20.576 ' 00:16:20.576 15:58:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:20.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.576 --rc genhtml_branch_coverage=1 00:16:20.576 --rc genhtml_function_coverage=1 00:16:20.576 --rc genhtml_legend=1 00:16:20.576 --rc geninfo_all_blocks=1 00:16:20.576 --rc geninfo_unexecuted_blocks=1 00:16:20.576 00:16:20.576 ' 00:16:20.576 15:58:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:20.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.576 --rc genhtml_branch_coverage=1 00:16:20.576 --rc genhtml_function_coverage=1 00:16:20.576 --rc genhtml_legend=1 00:16:20.576 --rc geninfo_all_blocks=1 00:16:20.576 --rc geninfo_unexecuted_blocks=1 00:16:20.576 00:16:20.576 ' 00:16:20.576 15:58:31 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:20.576 15:58:31 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:20.576 15:58:31 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:20.576 15:58:31 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:20.576 15:58:31 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
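The scripts/common.sh trace above is the version gate for lcov: cmp_versions splits each version string on '.', '-' and ':' (the IFS=.-: reads), validates each field with decimal, and compares the numeric fields left to right; 'lt 1.15 2' returning 0 means lcov 1.15 sorts before 2, which is what selects the lcov 1.x style '--rc lcov_branch_coverage' options seen in LCOV_OPTS. A compact C reimplementation of the same field-by-field comparison, written here purely for illustration (it is not code from the repo) and assuming purely numeric fields:

    #include <stdio.h>
    #include <stdlib.h>

    /* Compare dotted version strings numerically, field by field:
     * negative if a < b, zero if equal, positive if a > b. */
    static int cmp_versions(const char *a, const char *b)
    {
        char *ea, *eb;

        while (*a || *b) {
            long x = strtol(a, &ea, 10);  /* missing fields read as 0 */
            long y = strtol(b, &eb, 10);

            if (x != y)
                return x < y ? -1 : 1;
            a = *ea ? ea + 1 : ea;        /* step past the separator */
            b = *eb ? eb + 1 : eb;
        }
        return 0;
    }

    int main(void)
    {
        /* 1.15 sorts before 2, so the lcov 1.x option set is chosen */
        printf("%d\n", cmp_versions("1.15", "2"));   /* prints -1 */
        return 0;
    }

Comparing fields numerically rather than as strings is the point of the helper: a plain string comparison would wrongly order "1.15" after "1.2".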
00:16:20.576 15:58:31 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:20.576 15:58:31 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:20.576 15:58:31 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:20.576 15:58:31 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:20.576 15:58:31 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:20.576 15:58:31 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:20.576 15:58:31 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:20.576 15:58:31 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:20.576 15:58:31 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:20.576 15:58:31 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:20.576 15:58:31 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:20.576 15:58:31 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:20.576 15:58:31 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:20.576 15:58:31 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:20.576 15:58:31 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:20.576 15:58:31 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:20.576 15:58:31 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:20.576 15:58:31 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:20.576 15:58:31 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:20.576 15:58:31 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:20.576 15:58:31 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:20.576 15:58:31 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:20.576 15:58:31 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:20.576 15:58:31 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:20.576 15:58:31 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:20.576 15:58:31 -- ftl/trim.sh@23 -- # device=0000:00:07.0 00:16:20.576 15:58:31 -- ftl/trim.sh@24 -- # cache_device=0000:00:06.0 00:16:20.576 15:58:31 -- ftl/trim.sh@25 -- # timeout=240 00:16:20.576 15:58:31 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:20.576 15:58:31 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:20.576 15:58:31 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:20.576 15:58:31 -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:20.576 15:58:31 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:20.576 15:58:31 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:20.576 15:58:31 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:20.576 15:58:31 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:20.576 15:58:31 -- ftl/trim.sh@40 -- # svcpid=71758 00:16:20.576 15:58:31 -- ftl/trim.sh@41 -- # waitforlisten 71758 00:16:20.576 15:58:31 -- common/autotest_common.sh@829 -- # '[' -z 71758 ']' 00:16:20.576 15:58:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.576 15:58:31 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:20.576 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.576 15:58:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:20.576 15:58:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.576 15:58:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:20.576 15:58:31 -- common/autotest_common.sh@10 -- # set +x 00:16:20.576 [2024-11-29 15:58:31.986158] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:20.576 [2024-11-29 15:58:31.986305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71758 ] 00:16:20.837 [2024-11-29 15:58:32.143044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:21.099 [2024-11-29 15:58:32.365707] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:21.099 [2024-11-29 15:58:32.366200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.099 [2024-11-29 15:58:32.366557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.099 [2024-11-29 15:58:32.366640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.487 15:58:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:22.487 15:58:33 -- common/autotest_common.sh@862 -- # return 0 00:16:22.487 15:58:33 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:16:22.487 15:58:33 -- ftl/common.sh@54 -- # local name=nvme0 00:16:22.487 15:58:33 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:16:22.487 15:58:33 -- ftl/common.sh@56 -- # local size=103424 00:16:22.487 15:58:33 -- ftl/common.sh@59 -- # local base_bdev 00:16:22.487 15:58:33 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:16:22.487 15:58:33 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:22.487 15:58:33 -- ftl/common.sh@62 -- # local base_size 00:16:22.487 15:58:33 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:22.487 15:58:33 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:16:22.487 15:58:33 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:22.487 15:58:33 -- common/autotest_common.sh@1369 -- # local bs 00:16:22.487 15:58:33 -- common/autotest_common.sh@1370 -- # local nb 00:16:22.487 15:58:33 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:22.748 15:58:33 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:22.748 { 00:16:22.748 "name": "nvme0n1", 00:16:22.748 "aliases": [ 00:16:22.748 "5b622c62-8116-4ac5-a5f9-7748bcddac12" 00:16:22.748 ], 00:16:22.748 "product_name": "NVMe disk", 00:16:22.748 "block_size": 4096, 00:16:22.748 "num_blocks": 1310720, 00:16:22.748 "uuid": "5b622c62-8116-4ac5-a5f9-7748bcddac12", 00:16:22.748 "assigned_rate_limits": { 00:16:22.748 "rw_ios_per_sec": 0, 00:16:22.748 "rw_mbytes_per_sec": 0, 00:16:22.748 "r_mbytes_per_sec": 0, 00:16:22.748 "w_mbytes_per_sec": 0 00:16:22.748 }, 00:16:22.748 "claimed": true, 00:16:22.748 "claim_type": "read_many_write_one", 00:16:22.748 "zoned": false, 00:16:22.749 "supported_io_types": { 00:16:22.749 "read": true, 00:16:22.749 "write": true, 00:16:22.749 "unmap": true, 00:16:22.749 "write_zeroes": true, 
00:16:22.749 "flush": true, 00:16:22.749 "reset": true, 00:16:22.749 "compare": true, 00:16:22.749 "compare_and_write": false, 00:16:22.749 "abort": true, 00:16:22.749 "nvme_admin": true, 00:16:22.749 "nvme_io": true 00:16:22.749 }, 00:16:22.749 "driver_specific": { 00:16:22.749 "nvme": [ 00:16:22.749 { 00:16:22.749 "pci_address": "0000:00:07.0", 00:16:22.749 "trid": { 00:16:22.749 "trtype": "PCIe", 00:16:22.749 "traddr": "0000:00:07.0" 00:16:22.749 }, 00:16:22.749 "ctrlr_data": { 00:16:22.749 "cntlid": 0, 00:16:22.749 "vendor_id": "0x1b36", 00:16:22.749 "model_number": "QEMU NVMe Ctrl", 00:16:22.749 "serial_number": "12341", 00:16:22.749 "firmware_revision": "8.0.0", 00:16:22.749 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:22.749 "oacs": { 00:16:22.749 "security": 0, 00:16:22.749 "format": 1, 00:16:22.749 "firmware": 0, 00:16:22.749 "ns_manage": 1 00:16:22.749 }, 00:16:22.749 "multi_ctrlr": false, 00:16:22.749 "ana_reporting": false 00:16:22.749 }, 00:16:22.749 "vs": { 00:16:22.749 "nvme_version": "1.4" 00:16:22.749 }, 00:16:22.749 "ns_data": { 00:16:22.749 "id": 1, 00:16:22.749 "can_share": false 00:16:22.749 } 00:16:22.749 } 00:16:22.749 ], 00:16:22.749 "mp_policy": "active_passive" 00:16:22.749 } 00:16:22.749 } 00:16:22.749 ]' 00:16:22.749 15:58:33 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:22.749 15:58:33 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:22.749 15:58:33 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:22.749 15:58:34 -- common/autotest_common.sh@1373 -- # nb=1310720 00:16:22.749 15:58:34 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:16:22.749 15:58:34 -- common/autotest_common.sh@1377 -- # echo 5120 00:16:22.749 15:58:34 -- ftl/common.sh@63 -- # base_size=5120 00:16:22.749 15:58:34 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:22.749 15:58:34 -- ftl/common.sh@67 -- # clear_lvols 00:16:22.749 15:58:34 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:22.749 15:58:34 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:23.009 15:58:34 -- ftl/common.sh@28 -- # stores=fdbb3efc-9fd4-4307-904b-9571ac74fbb3 00:16:23.009 15:58:34 -- ftl/common.sh@29 -- # for lvs in $stores 00:16:23.009 15:58:34 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fdbb3efc-9fd4-4307-904b-9571ac74fbb3 00:16:23.269 15:58:34 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:23.269 15:58:34 -- ftl/common.sh@68 -- # lvs=9ead3772-c5d6-4609-ba9f-c70920c68bfb 00:16:23.269 15:58:34 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9ead3772-c5d6-4609-ba9f-c70920c68bfb 00:16:23.531 15:58:34 -- ftl/trim.sh@43 -- # split_bdev=6f83e187-1fa8-4ccf-8396-658db7f071ac 00:16:23.531 15:58:34 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:06.0 6f83e187-1fa8-4ccf-8396-658db7f071ac 00:16:23.531 15:58:34 -- ftl/common.sh@35 -- # local name=nvc0 00:16:23.531 15:58:34 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:16:23.531 15:58:34 -- ftl/common.sh@37 -- # local base_bdev=6f83e187-1fa8-4ccf-8396-658db7f071ac 00:16:23.531 15:58:34 -- ftl/common.sh@38 -- # local cache_size= 00:16:23.531 15:58:34 -- ftl/common.sh@41 -- # get_bdev_size 6f83e187-1fa8-4ccf-8396-658db7f071ac 00:16:23.531 15:58:34 -- common/autotest_common.sh@1367 -- # local bdev_name=6f83e187-1fa8-4ccf-8396-658db7f071ac 00:16:23.531 15:58:34 -- 
common/autotest_common.sh@1368 -- # local bdev_info 00:16:23.531 15:58:34 -- common/autotest_common.sh@1369 -- # local bs 00:16:23.531 15:58:34 -- common/autotest_common.sh@1370 -- # local nb 00:16:23.531 15:58:34 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6f83e187-1fa8-4ccf-8396-658db7f071ac 00:16:23.792 15:58:35 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:23.792 { 00:16:23.792 "name": "6f83e187-1fa8-4ccf-8396-658db7f071ac", 00:16:23.792 "aliases": [ 00:16:23.792 "lvs/nvme0n1p0" 00:16:23.792 ], 00:16:23.792 "product_name": "Logical Volume", 00:16:23.792 "block_size": 4096, 00:16:23.792 "num_blocks": 26476544, 00:16:23.792 "uuid": "6f83e187-1fa8-4ccf-8396-658db7f071ac", 00:16:23.792 "assigned_rate_limits": { 00:16:23.792 "rw_ios_per_sec": 0, 00:16:23.792 "rw_mbytes_per_sec": 0, 00:16:23.792 "r_mbytes_per_sec": 0, 00:16:23.792 "w_mbytes_per_sec": 0 00:16:23.792 }, 00:16:23.792 "claimed": false, 00:16:23.792 "zoned": false, 00:16:23.792 "supported_io_types": { 00:16:23.792 "read": true, 00:16:23.792 "write": true, 00:16:23.792 "unmap": true, 00:16:23.792 "write_zeroes": true, 00:16:23.792 "flush": false, 00:16:23.792 "reset": true, 00:16:23.792 "compare": false, 00:16:23.792 "compare_and_write": false, 00:16:23.792 "abort": false, 00:16:23.792 "nvme_admin": false, 00:16:23.792 "nvme_io": false 00:16:23.792 }, 00:16:23.792 "driver_specific": { 00:16:23.792 "lvol": { 00:16:23.792 "lvol_store_uuid": "9ead3772-c5d6-4609-ba9f-c70920c68bfb", 00:16:23.792 "base_bdev": "nvme0n1", 00:16:23.792 "thin_provision": true, 00:16:23.792 "snapshot": false, 00:16:23.793 "clone": false, 00:16:23.793 "esnap_clone": false 00:16:23.793 } 00:16:23.793 } 00:16:23.793 } 00:16:23.793 ]' 00:16:23.793 15:58:35 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:23.793 15:58:35 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:23.793 15:58:35 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:23.793 15:58:35 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:23.793 15:58:35 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:23.793 15:58:35 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:23.793 15:58:35 -- ftl/common.sh@41 -- # local base_size=5171 00:16:23.793 15:58:35 -- ftl/common.sh@44 -- # local nvc_bdev 00:16:23.793 15:58:35 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:16:24.052 15:58:35 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:24.052 15:58:35 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:24.052 15:58:35 -- ftl/common.sh@48 -- # get_bdev_size 6f83e187-1fa8-4ccf-8396-658db7f071ac 00:16:24.052 15:58:35 -- common/autotest_common.sh@1367 -- # local bdev_name=6f83e187-1fa8-4ccf-8396-658db7f071ac 00:16:24.052 15:58:35 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:24.052 15:58:35 -- common/autotest_common.sh@1369 -- # local bs 00:16:24.052 15:58:35 -- common/autotest_common.sh@1370 -- # local nb 00:16:24.052 15:58:35 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6f83e187-1fa8-4ccf-8396-658db7f071ac 00:16:24.310 15:58:35 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:24.310 { 00:16:24.310 "name": "6f83e187-1fa8-4ccf-8396-658db7f071ac", 00:16:24.310 "aliases": [ 00:16:24.310 "lvs/nvme0n1p0" 00:16:24.310 ], 00:16:24.310 "product_name": "Logical Volume", 00:16:24.310 "block_size": 4096, 00:16:24.310 "num_blocks": 26476544, 
00:16:24.310 "uuid": "6f83e187-1fa8-4ccf-8396-658db7f071ac", 00:16:24.310 "assigned_rate_limits": { 00:16:24.310 "rw_ios_per_sec": 0, 00:16:24.310 "rw_mbytes_per_sec": 0, 00:16:24.310 "r_mbytes_per_sec": 0, 00:16:24.310 "w_mbytes_per_sec": 0 00:16:24.310 }, 00:16:24.310 "claimed": false, 00:16:24.310 "zoned": false, 00:16:24.310 "supported_io_types": { 00:16:24.310 "read": true, 00:16:24.310 "write": true, 00:16:24.310 "unmap": true, 00:16:24.310 "write_zeroes": true, 00:16:24.310 "flush": false, 00:16:24.310 "reset": true, 00:16:24.310 "compare": false, 00:16:24.310 "compare_and_write": false, 00:16:24.310 "abort": false, 00:16:24.310 "nvme_admin": false, 00:16:24.310 "nvme_io": false 00:16:24.310 }, 00:16:24.310 "driver_specific": { 00:16:24.310 "lvol": { 00:16:24.310 "lvol_store_uuid": "9ead3772-c5d6-4609-ba9f-c70920c68bfb", 00:16:24.310 "base_bdev": "nvme0n1", 00:16:24.310 "thin_provision": true, 00:16:24.310 "snapshot": false, 00:16:24.310 "clone": false, 00:16:24.310 "esnap_clone": false 00:16:24.310 } 00:16:24.310 } 00:16:24.310 } 00:16:24.310 ]' 00:16:24.310 15:58:35 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:24.310 15:58:35 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:24.310 15:58:35 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:24.310 15:58:35 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:24.310 15:58:35 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:24.310 15:58:35 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:24.310 15:58:35 -- ftl/common.sh@48 -- # cache_size=5171 00:16:24.310 15:58:35 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:24.569 15:58:35 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:24.569 15:58:35 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:24.569 15:58:35 -- ftl/trim.sh@47 -- # get_bdev_size 6f83e187-1fa8-4ccf-8396-658db7f071ac 00:16:24.569 15:58:35 -- common/autotest_common.sh@1367 -- # local bdev_name=6f83e187-1fa8-4ccf-8396-658db7f071ac 00:16:24.569 15:58:35 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:24.569 15:58:35 -- common/autotest_common.sh@1369 -- # local bs 00:16:24.569 15:58:35 -- common/autotest_common.sh@1370 -- # local nb 00:16:24.569 15:58:35 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6f83e187-1fa8-4ccf-8396-658db7f071ac 00:16:24.828 15:58:36 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:24.828 { 00:16:24.828 "name": "6f83e187-1fa8-4ccf-8396-658db7f071ac", 00:16:24.828 "aliases": [ 00:16:24.828 "lvs/nvme0n1p0" 00:16:24.828 ], 00:16:24.828 "product_name": "Logical Volume", 00:16:24.828 "block_size": 4096, 00:16:24.828 "num_blocks": 26476544, 00:16:24.828 "uuid": "6f83e187-1fa8-4ccf-8396-658db7f071ac", 00:16:24.828 "assigned_rate_limits": { 00:16:24.828 "rw_ios_per_sec": 0, 00:16:24.828 "rw_mbytes_per_sec": 0, 00:16:24.828 "r_mbytes_per_sec": 0, 00:16:24.828 "w_mbytes_per_sec": 0 00:16:24.828 }, 00:16:24.828 "claimed": false, 00:16:24.828 "zoned": false, 00:16:24.828 "supported_io_types": { 00:16:24.828 "read": true, 00:16:24.828 "write": true, 00:16:24.828 "unmap": true, 00:16:24.828 "write_zeroes": true, 00:16:24.828 "flush": false, 00:16:24.828 "reset": true, 00:16:24.828 "compare": false, 00:16:24.828 "compare_and_write": false, 00:16:24.828 "abort": false, 00:16:24.828 "nvme_admin": false, 00:16:24.828 "nvme_io": false 00:16:24.828 }, 00:16:24.828 "driver_specific": { 00:16:24.828 "lvol": { 00:16:24.828 
"lvol_store_uuid": "9ead3772-c5d6-4609-ba9f-c70920c68bfb", 00:16:24.828 "base_bdev": "nvme0n1", 00:16:24.828 "thin_provision": true, 00:16:24.828 "snapshot": false, 00:16:24.828 "clone": false, 00:16:24.828 "esnap_clone": false 00:16:24.828 } 00:16:24.828 } 00:16:24.828 } 00:16:24.828 ]' 00:16:24.828 15:58:36 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:24.828 15:58:36 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:24.828 15:58:36 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:24.828 15:58:36 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:24.828 15:58:36 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:24.829 15:58:36 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:24.829 15:58:36 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:24.829 15:58:36 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6f83e187-1fa8-4ccf-8396-658db7f071ac -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:24.829 [2024-11-29 15:58:36.230517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.829 [2024-11-29 15:58:36.230554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:24.829 [2024-11-29 15:58:36.230567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:24.829 [2024-11-29 15:58:36.230574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.829 [2024-11-29 15:58:36.232748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.829 [2024-11-29 15:58:36.232778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:24.829 [2024-11-29 15:58:36.232787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.144 ms 00:16:24.829 [2024-11-29 15:58:36.232794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.829 [2024-11-29 15:58:36.232872] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:24.829 [2024-11-29 15:58:36.233440] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:24.829 [2024-11-29 15:58:36.233466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.829 [2024-11-29 15:58:36.233473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:24.829 [2024-11-29 15:58:36.233480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:16:24.829 [2024-11-29 15:58:36.233486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.829 [2024-11-29 15:58:36.233765] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 08798bdb-4e1e-4c21-9160-7bdd4bf640f7 00:16:24.829 [2024-11-29 15:58:36.234809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.829 [2024-11-29 15:58:36.234844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:24.829 [2024-11-29 15:58:36.234853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:24.829 [2024-11-29 15:58:36.234861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.829 [2024-11-29 15:58:36.240101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.829 [2024-11-29 15:58:36.240128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:24.829 
[2024-11-29 15:58:36.240136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.170 ms
00:16:24.829 [2024-11-29 15:58:36.240143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:24.829 [2024-11-29 15:58:36.240255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:24.829 [2024-11-29 15:58:36.240271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:16:24.829 [2024-11-29 15:58:36.240278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms
00:16:24.829 [2024-11-29 15:58:36.240287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:24.829 [2024-11-29 15:58:36.240330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:24.829 [2024-11-29 15:58:36.240342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:16:24.829 [2024-11-29 15:58:36.240348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:16:24.829 [2024-11-29 15:58:36.240355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:24.829 [2024-11-29 15:58:36.240393] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:16:24.829 [2024-11-29 15:58:36.243397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:24.829 [2024-11-29 15:58:36.243422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:16:24.829 [2024-11-29 15:58:36.243432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.009 ms
00:16:24.829 [2024-11-29 15:58:36.243438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:24.829 [2024-11-29 15:58:36.243490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:24.829 [2024-11-29 15:58:36.243497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:16:24.829 [2024-11-29 15:58:36.243515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:16:24.829 [2024-11-29 15:58:36.243520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:24.829 [2024-11-29 15:58:36.243551] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:16:24.829 [2024-11-29 15:58:36.243633] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes
00:16:24.829 [2024-11-29 15:58:36.243648] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:16:24.829 [2024-11-29 15:58:36.243656] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes
00:16:24.829 [2024-11-29 15:58:36.243665] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:16:24.829 [2024-11-29 15:58:36.243672] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:16:24.829 [2024-11-29 15:58:36.243680] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:16:24.829 [2024-11-29 15:58:36.243686] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:16:24.829 [2024-11-29 15:58:36.243693] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024
00:16:24.829 [2024-11-29 15:58:36.243699] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4
00:16:24.829 [2024-11-29 15:58:36.243707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:24.829 [2024-11-29 15:58:36.243712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:16:24.829 [2024-11-29 15:58:36.243719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms
00:16:24.829 [2024-11-29 15:58:36.243725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:24.829 [2024-11-29 15:58:36.243793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:24.829 [2024-11-29 15:58:36.243799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:16:24.829 [2024-11-29 15:58:36.243808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:16:24.829 [2024-11-29 15:58:36.243813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:24.829 [2024-11-29 15:58:36.243899] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:16:24.829 [2024-11-29 15:58:36.243912] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:16:24.829 [2024-11-29 15:58:36.243920] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:16:24.829 [2024-11-29 15:58:36.243926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:16:24.829 [2024-11-29 15:58:36.243933] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:16:24.829 [2024-11-29 15:58:36.243938] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:16:24.829 [2024-11-29 15:58:36.243945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:16:24.829 [2024-11-29 15:58:36.243950] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:16:24.829 [2024-11-29 15:58:36.243957] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:16:24.829 [2024-11-29 15:58:36.243962] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:16:24.829 [2024-11-29 15:58:36.243968] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:16:24.829 [2024-11-29 15:58:36.243983] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:16:24.829 [2024-11-29 15:58:36.243989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:16:24.829 [2024-11-29 15:58:36.243995] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:16:24.829 [2024-11-29 15:58:36.244002] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB
00:16:24.829 [2024-11-29 15:58:36.244007] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:16:24.829 [2024-11-29 15:58:36.244014] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:16:24.829 [2024-11-29 15:58:36.244019] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB
00:16:24.829 [2024-11-29 15:58:36.244025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:16:24.829 [2024-11-29 15:58:36.244030] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc
00:16:24.829 [2024-11-29 15:58:36.244036] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB
00:16:24.830 [2024-11-29 15:58:36.244041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB
00:16:24.830 [2024-11-29 15:58:36.244047] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:16:24.830 [2024-11-29 15:58:36.244052] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:16:24.830 [2024-11-29 15:58:36.244059] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:16:24.830 [2024-11-29 15:58:36.244064] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:16:24.830 [2024-11-29 15:58:36.244070] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB
00:16:24.830 [2024-11-29 15:58:36.244075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:16:24.830 [2024-11-29 15:58:36.244081] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:16:24.830 [2024-11-29 15:58:36.244086] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:16:24.830 [2024-11-29 15:58:36.244092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:16:24.830 [2024-11-29 15:58:36.244097] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:16:24.830 [2024-11-29 15:58:36.244104] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB
00:16:24.830 [2024-11-29 15:58:36.244109] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:16:24.830 [2024-11-29 15:58:36.244115] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:16:24.830 [2024-11-29 15:58:36.244121] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:16:24.830 [2024-11-29 15:58:36.244127] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:16:24.830 [2024-11-29 15:58:36.244132] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:16:24.830 [2024-11-29 15:58:36.244138] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB
00:16:24.830 [2024-11-29 15:58:36.244143] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:16:24.830 [2024-11-29 15:58:36.244150] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:16:24.830 [2024-11-29 15:58:36.244155] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:16:24.830 [2024-11-29 15:58:36.244162] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:16:24.830 [2024-11-29 15:58:36.244167] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:16:24.830 [2024-11-29 15:58:36.244176] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:16:24.830 [2024-11-29 15:58:36.244181] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:16:24.830 [2024-11-29 15:58:36.244187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:16:24.830 [2024-11-29 15:58:36.244192] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:16:24.830 [2024-11-29 15:58:36.244200] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:16:24.830 [2024-11-29 15:58:36.244205] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:16:24.830 [2024-11-29 15:58:36.244212] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:16:24.830 [2024-11-29 15:58:36.244219] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:16:24.830 [2024-11-29 15:58:36.244227] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:16:24.830 [2024-11-29 15:58:36.244232] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80
00:16:24.830 [2024-11-29 15:58:36.244238] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80
00:16:24.830 [2024-11-29 15:58:36.244244] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400
00:16:24.830 [2024-11-29 15:58:36.244253] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400
00:16:24.830 [2024-11-29 15:58:36.244258] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400
00:16:24.830 [2024-11-29 15:58:36.244265] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400
00:16:24.830 [2024-11-29 15:58:36.244270] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40
00:16:24.830 [2024-11-29 15:58:36.244277] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40
00:16:24.830 [2024-11-29 15:58:36.244282] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20
00:16:24.830 [2024-11-29 15:58:36.244289] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20
00:16:24.830 [2024-11-29 15:58:36.244295] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000
00:16:24.830 [2024-11-29 15:58:36.244304] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720
00:16:24.830 [2024-11-29 15:58:36.244310] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:16:24.830 [2024-11-29 15:58:36.244317] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:16:24.830 [2024-11-29 15:58:36.244323] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:16:24.830 [2024-11-29 15:58:36.244329] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:16:24.830 [2024-11-29 15:58:36.244335] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:16:24.830 [2024-11-29 15:58:36.244341] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:16:24.830 [2024-11-29 15:58:36.244347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:24.830 [2024-11-29 15:58:36.244353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:16:24.830 [2024-11-29 15:58:36.244359] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms
00:16:24.830 [2024-11-29 15:58:36.244365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:24.830 [2024-11-29 15:58:36.256739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:24.830 [2024-11-29 15:58:36.256772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:16:24.830 [2024-11-29 15:58:36.256780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.282 ms
00:16:24.830 [2024-11-29 15:58:36.256788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:24.830 [2024-11-29 15:58:36.256883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:24.830 [2024-11-29 15:58:36.256901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:16:24.830 [2024-11-29 15:58:36.256909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms
00:16:24.830 [2024-11-29 15:58:36.256916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:25.089 [2024-11-29 15:58:36.282512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:25.089 [2024-11-29 15:58:36.282548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:16:25.089 [2024-11-29 15:58:36.282557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.569 ms
00:16:25.089 [2024-11-29 15:58:36.282564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:25.089 [2024-11-29 15:58:36.282619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:25.089 [2024-11-29 15:58:36.282629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:16:25.089 [2024-11-29 15:58:36.282636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms
00:16:25.089 [2024-11-29 15:58:36.282647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:25.089 [2024-11-29 15:58:36.282952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:25.089 [2024-11-29 15:58:36.282983] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:16:25.089 [2024-11-29 15:58:36.282991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms
00:16:25.089 [2024-11-29 15:58:36.282998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:25.089 [2024-11-29 15:58:36.283095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:25.089 [2024-11-29 15:58:36.283110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:16:25.089 [2024-11-29 15:58:36.283116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms
00:16:25.089 [2024-11-29 15:58:36.283123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:25.089 [2024-11-29 15:58:36.310762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:25.089 [2024-11-29 15:58:36.310798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:16:25.089 [2024-11-29 15:58:36.310807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.606 ms
00:16:25.089 [2024-11-29 15:58:36.310814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:25.089 [2024-11-29 15:58:36.319968] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:16:25.089 [2024-11-29 15:58:36.332829] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:25.089 [2024-11-29 15:58:36.332858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:16:25.089 [2024-11-29 15:58:36.332869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.914 ms
00:16:25.089 [2024-11-29 15:58:36.332875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:25.089 [2024-11-29 15:58:36.408477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:25.089 [2024-11-29 15:58:36.408516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P
00:16:25.089 [2024-11-29 15:58:36.408529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.543 ms
00:16:25.089 [2024-11-29 15:58:36.408537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:25.089 [2024-11-29 15:58:36.408594] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time.
00:16:25.089 [2024-11-29 15:58:36.408603] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB
00:16:27.621 [2024-11-29 15:58:38.886629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:27.621 [2024-11-29 15:58:38.886685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache
00:16:27.621 [2024-11-29 15:58:38.886701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2478.020 ms
00:16:27.621 [2024-11-29 15:58:38.886710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:27.621 [2024-11-29 15:58:38.886937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:27.621 [2024-11-29 15:58:38.886953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:16:27.621 [2024-11-29 15:58:38.886964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms
00:16:27.621 [2024-11-29 15:58:38.886998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:27.621 [2024-11-29 15:58:38.909894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:27.621 [2024-11-29 15:58:38.909928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata
00:16:27.621 [2024-11-29 15:58:38.909941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.854 ms
00:16:27.621 [2024-11-29 15:58:38.909949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:27.621 [2024-11-29 15:58:38.932472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:27.621 [2024-11-29 15:58:38.932503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata
00:16:27.621 [2024-11-29 15:58:38.932518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.449 ms
00:16:27.621 [2024-11-29 15:58:38.932525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:27.621 [2024-11-29 15:58:38.932862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:27.621 [2024-11-29 15:58:38.932884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:16:27.621 [2024-11-29 15:58:38.932894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms
00:16:27.621 [2024-11-29 15:58:38.932903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:27.621 [2024-11-29 15:58:38.995087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:27.621 [2024-11-29 15:58:38.995119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:16:27.621 [2024-11-29 15:58:38.995131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.142 ms
00:16:27.621 [2024-11-29 15:58:38.995139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:27.621 [2024-11-29 15:58:39.019411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:27.621 [2024-11-29 15:58:39.019450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:16:27.621 [2024-11-29 15:58:39.019462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.198 ms
00:16:27.621 [2024-11-29 15:58:39.019470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:27.621 [2024-11-29 15:58:39.023349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:27.621 [2024-11-29 15:58:39.023382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs
00:16:27.621 [2024-11-29 15:58:39.023394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.822 ms
00:16:27.621 [2024-11-29 15:58:39.023402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:27.621 [2024-11-29 15:58:39.046411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:27.621 [2024-11-29 15:58:39.046442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:16:27.621 [2024-11-29 15:58:39.046453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.954 ms
00:16:27.621 [2024-11-29 15:58:39.046460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:27.622 [2024-11-29 15:58:39.046520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:27.622 [2024-11-29 15:58:39.046530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:16:27.622 [2024-11-29 15:58:39.046540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:16:27.622 [2024-11-29 15:58:39.046547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:27.622 [2024-11-29 15:58:39.046639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:27.622 [2024-11-29 15:58:39.046659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:16:27.622 [2024-11-29 15:58:39.046668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms
00:16:27.622 [2024-11-29 15:58:39.046675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:27.622 [2024-11-29 15:58:39.047433] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:16:27.622 [2024-11-29 15:58:39.050509] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2816.657 ms, result 0
00:16:27.880 [2024-11-29 15:58:39.051295] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:16:27.880 {
00:16:27.880 "name": "ftl0",
00:16:27.880 "uuid": "08798bdb-4e1e-4c21-9160-7bdd4bf640f7"
00:16:27.880 }
00:16:27.880 15:58:39 -- ftl/trim.sh@51 -- # waitforbdev ftl0
00:16:27.880 15:58:39 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0
00:16:27.880 15:58:39 -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:27.880 15:58:39 -- common/autotest_common.sh@899 -- # local i
00:16:27.880 15:58:39 -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:27.880 15:58:39 -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:27.880 15:58:39 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:16:27.880 15:58:39 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000
00:16:28.139 [
00:16:28.139 {
00:16:28.139 "name": "ftl0",
00:16:28.139 "aliases": [
00:16:28.139 "08798bdb-4e1e-4c21-9160-7bdd4bf640f7"
00:16:28.139 ],
00:16:28.139 "product_name": "FTL disk",
00:16:28.139 "block_size": 4096,
00:16:28.139 "num_blocks": 23592960,
00:16:28.139 "uuid": "08798bdb-4e1e-4c21-9160-7bdd4bf640f7",
00:16:28.139 "assigned_rate_limits": {
00:16:28.139 "rw_ios_per_sec": 0,
00:16:28.139 "rw_mbytes_per_sec": 0,
00:16:28.139 "r_mbytes_per_sec": 0,
00:16:28.139 "w_mbytes_per_sec": 0
00:16:28.139 },
00:16:28.139 "claimed": false,
00:16:28.139 "zoned": false,
00:16:28.139 "supported_io_types": {
00:16:28.139 "read": true,
00:16:28.139 "write": true,
00:16:28.139 "unmap": true,
00:16:28.139 "write_zeroes": true,
00:16:28.139 "flush": true,
00:16:28.139 "reset": false,
00:16:28.139 "compare": false,
00:16:28.139 "compare_and_write": false,
00:16:28.139 "abort": false,
00:16:28.139 "nvme_admin": false,
00:16:28.139 "nvme_io": false
00:16:28.139 },
00:16:28.139 "driver_specific": {
00:16:28.139 "ftl": {
00:16:28.139 "base_bdev": "6f83e187-1fa8-4ccf-8396-658db7f071ac",
00:16:28.139 "cache": "nvc0n1p0"
00:16:28.139 }
00:16:28.139 }
00:16:28.139 }
00:16:28.139 ]
00:16:28.139 15:58:39 -- common/autotest_common.sh@905 -- # return 0
00:16:28.139 15:58:39 -- ftl/trim.sh@54 -- # echo '{"subsystems": ['
00:16:28.139 15:58:39 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:16:28.398 15:58:39 -- ftl/trim.sh@56 -- # echo ']}'
00:16:28.398 15:58:39 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0
00:16:28.398 15:58:39 -- ftl/trim.sh@59 -- # bdev_info='[
00:16:28.398 {
00:16:28.398 "name": "ftl0",
00:16:28.398 "aliases": [
00:16:28.398 "08798bdb-4e1e-4c21-9160-7bdd4bf640f7"
00:16:28.398 ],
00:16:28.398 "product_name": "FTL disk",
00:16:28.398 "block_size": 4096,
00:16:28.398 "num_blocks": 23592960,
00:16:28.398 "uuid": "08798bdb-4e1e-4c21-9160-7bdd4bf640f7",
00:16:28.398 "assigned_rate_limits": {
00:16:28.398 "rw_ios_per_sec": 0,
00:16:28.398 "rw_mbytes_per_sec": 0,
00:16:28.398 "r_mbytes_per_sec": 0,
00:16:28.398 "w_mbytes_per_sec": 0
00:16:28.398 },
00:16:28.398 "claimed": false,
00:16:28.398 "zoned": false,
00:16:28.398 "supported_io_types": {
00:16:28.398 "read": true,
00:16:28.398 "write": true,
00:16:28.398 "unmap": true,
00:16:28.398 "write_zeroes": true,
00:16:28.398 "flush": true,
00:16:28.398 "reset": false,
00:16:28.398 "compare": false,
00:16:28.398 "compare_and_write": false,
00:16:28.398 "abort": false,
00:16:28.398 "nvme_admin": false,
00:16:28.398 "nvme_io": false
00:16:28.398 },
00:16:28.398 "driver_specific": {
00:16:28.398 "ftl": {
00:16:28.398 "base_bdev": "6f83e187-1fa8-4ccf-8396-658db7f071ac",
00:16:28.398 "cache": "nvc0n1p0"
00:16:28.398 }
00:16:28.398 }
00:16:28.398 }
00:16:28.398 ]'
00:16:28.398 15:58:39 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks'
00:16:28.398 15:58:39 -- ftl/trim.sh@60 -- # nb=23592960
00:16:28.398 15:58:39 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:16:28.657 [2024-11-29 15:58:39.946942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:28.657 [2024-11-29 15:58:39.946995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:16:28.657 [2024-11-29 15:58:39.947007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:16:28.657 [2024-11-29 15:58:39.947017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.657 [2024-11-29 15:58:39.947055] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:16:28.657 [2024-11-29 15:58:39.949623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:28.657 [2024-11-29 15:58:39.949651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:16:28.657 [2024-11-29 15:58:39.949665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.551 ms
00:16:28.657 [2024-11-29 15:58:39.949674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.657 [2024-11-29 15:58:39.950298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:28.657 [2024-11-29 15:58:39.950319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:16:28.657 [2024-11-29 15:58:39.950332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms
00:16:28.657 [2024-11-29 15:58:39.950340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.657 [2024-11-29 15:58:39.953992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:28.657 [2024-11-29 15:58:39.954020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:16:28.657 [2024-11-29 15:58:39.954034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.624 ms
00:16:28.657 [2024-11-29 15:58:39.954041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.657 [2024-11-29 15:58:39.960952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:28.657 [2024-11-29 15:58:39.960988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:16:28.657 [2024-11-29 15:58:39.960999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.846 ms
00:16:28.657 [2024-11-29 15:58:39.961007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.657 [2024-11-29 15:58:39.984650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:28.657 [2024-11-29 15:58:39.984681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:16:28.657 [2024-11-29 15:58:39.984694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.557 ms
00:16:28.657 [2024-11-29 15:58:39.984700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.657 [2024-11-29 15:58:39.999835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:28.657 [2024-11-29 15:58:39.999867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:16:28.657 [2024-11-29 15:58:39.999881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.080 ms
00:16:28.657 [2024-11-29 15:58:39.999889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.657 [2024-11-29 15:58:40.000113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:28.657 [2024-11-29 15:58:40.000124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:16:28.657 [2024-11-29 15:58:40.000139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms
00:16:28.657 [2024-11-29 15:58:40.000146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.658 [2024-11-29 15:58:40.022874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:28.658 [2024-11-29 15:58:40.022905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:16:28.658 [2024-11-29 15:58:40.022917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.692 ms
00:16:28.658 [2024-11-29 15:58:40.022923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.658 [2024-11-29 15:58:40.045288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:28.658 [2024-11-29 15:58:40.045318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:16:28.658 [2024-11-29 15:58:40.045330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.295 ms
00:16:28.658 [2024-11-29 15:58:40.045337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.658 [2024-11-29 15:58:40.067266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:28.658 [2024-11-29 15:58:40.067297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:16:28.658 [2024-11-29 15:58:40.067308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.873 ms
00:16:28.658 [2024-11-29 15:58:40.067315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.918 [2024-11-29 15:58:40.089984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:28.918 [2024-11-29 15:58:40.090012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:16:28.918 [2024-11-29 15:58:40.090025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.561 ms
00:16:28.918 [2024-11-29 15:58:40.090033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.918 [2024-11-29 15:58:40.090095] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:16:28.918 [2024-11-29 15:58:40.090109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:16:28.918 [2024-11-29 15:58:40.090120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:16:28.918 [2024-11-29 15:58:40.090128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:16:28.918 [2024-11-29 15:58:40.090137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:16:28.919 [2024-11-29 15:58:40.090881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:16:28.920 [2024-11-29 15:58:40.090891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:16:28.920 [2024-11-29 15:58:40.090899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:16:28.920 [2024-11-29 15:58:40.090908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:16:28.920 [2024-11-29 15:58:40.090915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:16:28.920 [2024-11-29 15:58:40.090924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:16:28.920 [2024-11-29 15:58:40.090931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:16:28.920 [2024-11-29 15:58:40.090941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:16:28.920 [2024-11-29 15:58:40.090956] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:16:28.920 [2024-11-29 15:58:40.090965] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 08798bdb-4e1e-4c21-9160-7bdd4bf640f7
00:16:28.920 [2024-11-29 15:58:40.090984] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:16:28.920 [2024-11-29 15:58:40.090992] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:16:28.920 [2024-11-29 15:58:40.090999] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:16:28.920 [2024-11-29 15:58:40.091008] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:16:28.920 [2024-11-29 15:58:40.091015] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:16:28.920 [2024-11-29 15:58:40.091024] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:16:28.920 [2024-11-29 15:58:40.091031] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:16:28.920 [2024-11-29 15:58:40.091041] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:16:28.920 [2024-11-29 15:58:40.091047] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:16:28.920 [2024-11-29 15:58:40.091055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:28.920 [2024-11-29 15:58:40.091065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:16:28.920 [2024-11-29 15:58:40.091074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.962 ms
00:16:28.920 [2024-11-29 15:58:40.091081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.920 [2024-11-29 15:58:40.103424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:28.920 [2024-11-29 15:58:40.103451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:16:28.920 [2024-11-29 15:58:40.103462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.296 ms
00:16:28.920 [2024-11-29 15:58:40.103469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.920 [2024-11-29 15:58:40.103692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:28.920 [2024-11-29 15:58:40.103702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:16:28.920 [2024-11-29 15:58:40.103711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms
00:16:28.920 [2024-11-29 15:58:40.103718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.920 [2024-11-29 15:58:40.147796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:28.920 [2024-11-29 15:58:40.147831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:16:28.920 [2024-11-29 15:58:40.147845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:28.920 [2024-11-29 15:58:40.147852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.920 [2024-11-29 15:58:40.147951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:28.920 [2024-11-29 15:58:40.147960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:16:28.920 [2024-11-29 15:58:40.147969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:28.920 [2024-11-29 15:58:40.147986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.920 [2024-11-29 15:58:40.148054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:28.920 [2024-11-29 15:58:40.148063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:16:28.920 [2024-11-29 15:58:40.148072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:28.920 [2024-11-29 15:58:40.148079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.920 [2024-11-29 15:58:40.148114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:28.920 [2024-11-29 15:58:40.148124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:16:28.920 [2024-11-29 15:58:40.148132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:28.920 [2024-11-29 15:58:40.148139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.920 [2024-11-29 15:58:40.232411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:28.920 [2024-11-29 15:58:40.232452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:16:28.920 [2024-11-29 15:58:40.232466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:28.920 [2024-11-29 15:58:40.232474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.920 [2024-11-29 15:58:40.260928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:28.920 [2024-11-29 15:58:40.260959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:16:28.920 [2024-11-29 15:58:40.260986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:28.920 [2024-11-29 15:58:40.260994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.920 [2024-11-29 15:58:40.261065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:28.920 [2024-11-29 15:58:40.261074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:16:28.920 [2024-11-29 15:58:40.261084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:28.920 [2024-11-29 15:58:40.261091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.920 [2024-11-29 15:58:40.261159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:28.920 [2024-11-29 15:58:40.261167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:16:28.920 [2024-11-29 15:58:40.261178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:28.920 [2024-11-29 15:58:40.261197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.920 [2024-11-29 15:58:40.261300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:28.920 [2024-11-29 15:58:40.261313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:16:28.920 [2024-11-29 15:58:40.261324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:28.920 [2024-11-29 15:58:40.261332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.920 [2024-11-29 15:58:40.261381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:28.920 [2024-11-29 15:58:40.261389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:16:28.920 [2024-11-29 15:58:40.261400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:28.920 [2024-11-29 15:58:40.261407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.920 [2024-11-29 15:58:40.261459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:28.920 [2024-11-29 15:58:40.261473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:16:28.920 [2024-11-29 15:58:40.261483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:28.920 [2024-11-29 15:58:40.261490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.920 [2024-11-29 15:58:40.261546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:28.920 [2024-11-29 15:58:40.261561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:16:28.920 [2024-11-29 15:58:40.261573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:28.920 [2024-11-29 15:58:40.261580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.920 [2024-11-29 15:58:40.261782] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 314.814 ms, result 0
00:16:28.920 true
00:16:28.920 15:58:40 -- ftl/trim.sh@63 -- # killprocess 71758
00:16:28.920 15:58:40 -- common/autotest_common.sh@936 -- # '[' -z 71758 ']'
00:16:28.920 15:58:40 -- common/autotest_common.sh@940 -- # kill -0 71758
00:16:28.920 15:58:40 -- common/autotest_common.sh@941 -- # uname
00:16:28.920 15:58:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:28.920 15:58:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71758
00:16:28.920 15:58:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:28.920 15:58:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:28.920 15:58:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71758'
00:16:28.920 killing process with pid 71758
00:16:28.920 15:58:40 -- common/autotest_common.sh@955 -- # kill 71758
00:16:28.920 15:58:40 -- common/autotest_common.sh@960 -- # wait 71758
00:16:35.481 15:58:46 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:16:36.052 65536+0 records in
00:16:36.052 65536+0 records out
00:16:36.052 268435456 bytes (268 MB, 256 MiB) copied, 1.00633 s, 267 MB/s
00:16:36.052 15:58:47 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:16:36.052 [2024-11-29 15:58:47.468753] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:36.052 [2024-11-29 15:58:47.468869] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71988 ]
00:16:36.333 [2024-11-29 15:58:47.621152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:36.611 [2024-11-29 15:58:47.838808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:36.873 [2024-11-29 15:58:48.090705] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:16:36.873 [2024-11-29 15:58:48.090765] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:16:36.873 [2024-11-29 15:58:48.242625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:36.873 [2024-11-29 15:58:48.242691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:16:36.873 [2024-11-29 15:58:48.242707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:16:36.873 [2024-11-29 15:58:48.242716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:36.873 [2024-11-29 15:58:48.245605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:36.873 [2024-11-29 15:58:48.245660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:16:36.873 [2024-11-29 15:58:48.245671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.868 ms
00:16:36.873 [2024-11-29 15:58:48.245679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:36.873 [2024-11-29 15:58:48.245806] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:16:36.873 [2024-11-29 15:58:48.246621] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:16:36.873 [2024-11-29 15:58:48.246660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:36.873 [2024-11-29 15:58:48.246669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:16:36.873 [2024-11-29 15:58:48.246679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.863 ms
00:16:36.873 [2024-11-29 15:58:48.246687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:36.873 [2024-11-29 15:58:48.248487] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:16:36.873 [2024-11-29 15:58:48.262172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:36.873 [2024-11-29 15:58:48.262220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:36.873 [2024-11-29 15:58:48.262233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.686 ms 00:16:36.873 [2024-11-29 15:58:48.262242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.873 [2024-11-29 15:58:48.262353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.873 [2024-11-29 15:58:48.262365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:36.873 [2024-11-29 15:58:48.262375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:16:36.873 [2024-11-29 15:58:48.262383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.873 [2024-11-29 15:58:48.270427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.873 [2024-11-29 15:58:48.270473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:36.873 [2024-11-29 15:58:48.270484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.993 ms 00:16:36.873 [2024-11-29 15:58:48.270497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.873 [2024-11-29 15:58:48.270612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.873 [2024-11-29 15:58:48.270624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:36.873 [2024-11-29 15:58:48.270634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:16:36.873 [2024-11-29 15:58:48.270646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.873 [2024-11-29 15:58:48.270673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.873 [2024-11-29 15:58:48.270682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:36.873 [2024-11-29 15:58:48.270691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:36.873 [2024-11-29 15:58:48.270698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.873 [2024-11-29 15:58:48.270730] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:36.873 [2024-11-29 15:58:48.274947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.873 [2024-11-29 15:58:48.275001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:36.873 [2024-11-29 15:58:48.275012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.233 ms 00:16:36.873 [2024-11-29 15:58:48.275024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.873 [2024-11-29 15:58:48.275098] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.873 [2024-11-29 15:58:48.275109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:36.873 [2024-11-29 15:58:48.275118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:36.873 [2024-11-29 15:58:48.275126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.873 [2024-11-29 15:58:48.275146] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:36.873 [2024-11-29 15:58:48.275167] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:36.873 [2024-11-29 15:58:48.275202] upgrade/ftl_sb_v5.c: 
287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:36.873 [2024-11-29 15:58:48.275221] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:36.873 [2024-11-29 15:58:48.275297] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:36.873 [2024-11-29 15:58:48.275317] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:36.873 [2024-11-29 15:58:48.275329] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:36.873 [2024-11-29 15:58:48.275341] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:36.873 [2024-11-29 15:58:48.275352] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:36.873 [2024-11-29 15:58:48.275362] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:36.873 [2024-11-29 15:58:48.275370] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:36.873 [2024-11-29 15:58:48.275378] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:36.873 [2024-11-29 15:58:48.275393] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:36.873 [2024-11-29 15:58:48.275401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.873 [2024-11-29 15:58:48.275410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:36.873 [2024-11-29 15:58:48.275418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:16:36.873 [2024-11-29 15:58:48.275426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.873 [2024-11-29 15:58:48.275492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.873 [2024-11-29 15:58:48.275501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:36.873 [2024-11-29 15:58:48.275509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:36.873 [2024-11-29 15:58:48.275517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.873 [2024-11-29 15:58:48.275595] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:36.873 [2024-11-29 15:58:48.275615] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:36.873 [2024-11-29 15:58:48.275624] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:36.873 [2024-11-29 15:58:48.275632] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.873 [2024-11-29 15:58:48.275640] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:36.873 [2024-11-29 15:58:48.275648] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:36.873 [2024-11-29 15:58:48.275657] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:36.873 [2024-11-29 15:58:48.275664] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:36.873 [2024-11-29 15:58:48.275672] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:36.873 [2024-11-29 15:58:48.275679] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:36.873 [2024-11-29 15:58:48.275686] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:36.873 [2024-11-29 15:58:48.275692] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:36.873 [2024-11-29 15:58:48.275699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:36.873 [2024-11-29 15:58:48.275710] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:36.873 [2024-11-29 15:58:48.275726] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:36.874 [2024-11-29 15:58:48.275733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.874 [2024-11-29 15:58:48.275739] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:36.874 [2024-11-29 15:58:48.275746] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:36.874 [2024-11-29 15:58:48.275752] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.874 [2024-11-29 15:58:48.275759] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:36.874 [2024-11-29 15:58:48.275766] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:36.874 [2024-11-29 15:58:48.275773] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:36.874 [2024-11-29 15:58:48.275779] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:36.874 [2024-11-29 15:58:48.275787] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:36.874 [2024-11-29 15:58:48.275793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:36.874 [2024-11-29 15:58:48.275799] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:36.874 [2024-11-29 15:58:48.275806] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:36.874 [2024-11-29 15:58:48.275817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:36.874 [2024-11-29 15:58:48.275823] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:36.874 [2024-11-29 15:58:48.275830] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:36.874 [2024-11-29 15:58:48.275837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:36.874 [2024-11-29 15:58:48.275844] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:36.874 [2024-11-29 15:58:48.275851] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:36.874 [2024-11-29 15:58:48.275857] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:36.874 [2024-11-29 15:58:48.275864] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:36.874 [2024-11-29 15:58:48.275871] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:36.874 [2024-11-29 15:58:48.275878] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:36.874 [2024-11-29 15:58:48.275884] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:36.874 [2024-11-29 15:58:48.275891] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:36.874 [2024-11-29 15:58:48.275897] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:36.874 [2024-11-29 15:58:48.275905] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:36.874 [2024-11-29 15:58:48.275913] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:36.874 
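
The dump_region output above walks the NV cache layout one region at a time, three NOTICE lines per region (name, offset, blocks); the base-device layout that starts here is printed the same way. A small awk sketch to fold those triplets back into one row per region, assuming the console log has been saved one entry per line to ftl0.log (a hypothetical capture, not a file the test itself produces):

  awk '/dump_region.*Region /  { name = $NF }
       /dump_region.*offset:/  { off  = $(NF-1) }
       /dump_region.*blocks:/  { printf "%-16s %9s MiB @ %9s MiB\n", name, $(NF-1), off }' ftl0.log

Read against the capacities reported above (base device 103424.00 MiB, NV cache 5171.00 MiB), the folded table makes a region that overruns its device easy to spot.
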
[2024-11-29 15:58:48.275921] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:36.874 [2024-11-29 15:58:48.275932] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.874 [2024-11-29 15:58:48.275940] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:36.874 [2024-11-29 15:58:48.275948] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:36.874 [2024-11-29 15:58:48.275955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:36.874 [2024-11-29 15:58:48.275964] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:36.874 [2024-11-29 15:58:48.275987] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:36.874 [2024-11-29 15:58:48.275994] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:36.874 [2024-11-29 15:58:48.276002] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:36.874 [2024-11-29 15:58:48.276013] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:36.874 [2024-11-29 15:58:48.276022] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:36.874 [2024-11-29 15:58:48.276031] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:36.874 [2024-11-29 15:58:48.276039] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:36.874 [2024-11-29 15:58:48.276048] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:36.874 [2024-11-29 15:58:48.276056] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:36.874 [2024-11-29 15:58:48.276064] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:36.874 [2024-11-29 15:58:48.276071] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:36.874 [2024-11-29 15:58:48.276080] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:36.874 [2024-11-29 15:58:48.276088] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:36.874 [2024-11-29 15:58:48.276097] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:36.874 [2024-11-29 15:58:48.276104] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:36.874 [2024-11-29 15:58:48.276112] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:36.874 [2024-11-29 15:58:48.276119] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:36.874 [2024-11-29 15:58:48.276127] upgrade/ftl_sb_v5.c: 
421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:36.874 [2024-11-29 15:58:48.276140] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:36.874 [2024-11-29 15:58:48.276148] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:36.874 [2024-11-29 15:58:48.276156] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:36.874 [2024-11-29 15:58:48.276163] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:36.874 [2024-11-29 15:58:48.276169] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:36.874 [2024-11-29 15:58:48.276177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.874 [2024-11-29 15:58:48.276185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:36.874 [2024-11-29 15:58:48.276195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.626 ms 00:16:36.874 [2024-11-29 15:58:48.276202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.874 [2024-11-29 15:58:48.294390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.874 [2024-11-29 15:58:48.294436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:36.874 [2024-11-29 15:58:48.294449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.144 ms 00:16:36.874 [2024-11-29 15:58:48.294458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.874 [2024-11-29 15:58:48.294586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.874 [2024-11-29 15:58:48.294597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:36.874 [2024-11-29 15:58:48.294607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:16:36.874 [2024-11-29 15:58:48.294614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.137 [2024-11-29 15:58:48.339073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.137 [2024-11-29 15:58:48.339111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:37.137 [2024-11-29 15:58:48.339123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.436 ms 00:16:37.137 [2024-11-29 15:58:48.339131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.137 [2024-11-29 15:58:48.339199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.137 [2024-11-29 15:58:48.339209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:37.137 [2024-11-29 15:58:48.339221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:37.137 [2024-11-29 15:58:48.339229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.137 [2024-11-29 15:58:48.339564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.137 [2024-11-29 15:58:48.339587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:37.137 [2024-11-29 15:58:48.339596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.316 ms 00:16:37.137 [2024-11-29 15:58:48.339604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.137 [2024-11-29 15:58:48.339722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.137 [2024-11-29 15:58:48.339732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:37.137 [2024-11-29 15:58:48.339740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:16:37.137 [2024-11-29 15:58:48.339747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.137 [2024-11-29 15:58:48.354057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.137 [2024-11-29 15:58:48.354087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:37.137 [2024-11-29 15:58:48.354096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.287 ms 00:16:37.137 [2024-11-29 15:58:48.354106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.137 [2024-11-29 15:58:48.366892] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:37.137 [2024-11-29 15:58:48.366932] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:37.137 [2024-11-29 15:58:48.366943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.137 [2024-11-29 15:58:48.366950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:37.137 [2024-11-29 15:58:48.366959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.743 ms 00:16:37.137 [2024-11-29 15:58:48.366965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.137 [2024-11-29 15:58:48.391600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.137 [2024-11-29 15:58:48.391636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:37.137 [2024-11-29 15:58:48.391651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.560 ms 00:16:37.137 [2024-11-29 15:58:48.391658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.137 [2024-11-29 15:58:48.403877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.137 [2024-11-29 15:58:48.403910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:37.137 [2024-11-29 15:58:48.403919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.165 ms 00:16:37.137 [2024-11-29 15:58:48.403933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.137 [2024-11-29 15:58:48.416025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.137 [2024-11-29 15:58:48.416058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:37.137 [2024-11-29 15:58:48.416068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.019 ms 00:16:37.137 [2024-11-29 15:58:48.416075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.137 [2024-11-29 15:58:48.416439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.137 [2024-11-29 15:58:48.416451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:37.137 [2024-11-29 15:58:48.416459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:16:37.137 [2024-11-29 
15:58:48.416466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.137 [2024-11-29 15:58:48.477873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.137 [2024-11-29 15:58:48.477928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:37.137 [2024-11-29 15:58:48.477943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.386 ms 00:16:37.137 [2024-11-29 15:58:48.477952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.137 [2024-11-29 15:58:48.489171] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:37.137 [2024-11-29 15:58:48.503184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.137 [2024-11-29 15:58:48.503220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:37.137 [2024-11-29 15:58:48.503231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.112 ms 00:16:37.137 [2024-11-29 15:58:48.503239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.137 [2024-11-29 15:58:48.503303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.137 [2024-11-29 15:58:48.503312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:37.137 [2024-11-29 15:58:48.503321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:37.137 [2024-11-29 15:58:48.503331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.137 [2024-11-29 15:58:48.503374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.137 [2024-11-29 15:58:48.503386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:37.137 [2024-11-29 15:58:48.503394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:37.137 [2024-11-29 15:58:48.503401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.137 [2024-11-29 15:58:48.504563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.137 [2024-11-29 15:58:48.504594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:37.137 [2024-11-29 15:58:48.504603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.141 ms 00:16:37.137 [2024-11-29 15:58:48.504610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.137 [2024-11-29 15:58:48.504639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.137 [2024-11-29 15:58:48.504647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:37.137 [2024-11-29 15:58:48.504658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:37.137 [2024-11-29 15:58:48.504665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.138 [2024-11-29 15:58:48.504696] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:37.138 [2024-11-29 15:58:48.504705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.138 [2024-11-29 15:58:48.504712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:37.138 [2024-11-29 15:58:48.504720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:37.138 [2024-11-29 15:58:48.504727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.138 [2024-11-29 15:58:48.528213] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.138 [2024-11-29 15:58:48.528249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:37.138 [2024-11-29 15:58:48.528259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.464 ms 00:16:37.138 [2024-11-29 15:58:48.528267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.138 [2024-11-29 15:58:48.528350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.138 [2024-11-29 15:58:48.528360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:37.138 [2024-11-29 15:58:48.528368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:37.138 [2024-11-29 15:58:48.528375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.138 [2024-11-29 15:58:48.529120] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:37.138 [2024-11-29 15:58:48.532245] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 286.226 ms, result 0 00:16:37.138 [2024-11-29 15:58:48.533274] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:37.138 [2024-11-29 15:58:48.546707] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:38.523  [2024-11-29T15:58:50.900Z] Copying: 20/256 [MB] (20 MBps) [2024-11-29T15:58:51.841Z] Copying: 36/256 [MB] (15 MBps) [2024-11-29T15:58:52.784Z] Copying: 57/256 [MB] (21 MBps) [2024-11-29T15:58:53.724Z] Copying: 70/256 [MB] (13 MBps) [2024-11-29T15:58:54.664Z] Copying: 91/256 [MB] (20 MBps) [2024-11-29T15:58:55.606Z] Copying: 109/256 [MB] (17 MBps) [2024-11-29T15:58:56.987Z] Copying: 127/256 [MB] (18 MBps) [2024-11-29T15:58:57.559Z] Copying: 147/256 [MB] (19 MBps) [2024-11-29T15:58:58.942Z] Copying: 158/256 [MB] (11 MBps) [2024-11-29T15:58:59.883Z] Copying: 171/256 [MB] (12 MBps) [2024-11-29T15:59:00.825Z] Copying: 181/256 [MB] (10 MBps) [2024-11-29T15:59:01.770Z] Copying: 194/256 [MB] (12 MBps) [2024-11-29T15:59:02.715Z] Copying: 204/256 [MB] (10 MBps) [2024-11-29T15:59:03.662Z] Copying: 218/256 [MB] (13 MBps) [2024-11-29T15:59:04.608Z] Copying: 232/256 [MB] (13 MBps) [2024-11-29T15:59:05.553Z] Copying: 242/256 [MB] (10 MBps) [2024-11-29T15:59:05.553Z] Copying: 256/256 [MB] (average 15 MBps)[2024-11-29 15:59:05.401228] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:54.122 [2024-11-29 15:59:05.410678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.122 [2024-11-29 15:59:05.410716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:54.122 [2024-11-29 15:59:05.410736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:54.122 [2024-11-29 15:59:05.410744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.122 [2024-11-29 15:59:05.410765] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:54.122 [2024-11-29 15:59:05.413461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.122 [2024-11-29 15:59:05.413491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:54.122 [2024-11-29 15:59:05.413502] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 2.683 ms 00:16:54.122 [2024-11-29 15:59:05.413509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.122 [2024-11-29 15:59:05.416263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.122 [2024-11-29 15:59:05.416299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:54.122 [2024-11-29 15:59:05.416309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.730 ms 00:16:54.122 [2024-11-29 15:59:05.416317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.122 [2024-11-29 15:59:05.424203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.122 [2024-11-29 15:59:05.424239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:54.122 [2024-11-29 15:59:05.424248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.862 ms 00:16:54.122 [2024-11-29 15:59:05.424255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.122 [2024-11-29 15:59:05.431140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.122 [2024-11-29 15:59:05.431176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:54.122 [2024-11-29 15:59:05.431186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.826 ms 00:16:54.122 [2024-11-29 15:59:05.431194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.122 [2024-11-29 15:59:05.455384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.122 [2024-11-29 15:59:05.455424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:54.122 [2024-11-29 15:59:05.455436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.140 ms 00:16:54.122 [2024-11-29 15:59:05.455443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.122 [2024-11-29 15:59:05.471667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.122 [2024-11-29 15:59:05.471716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:54.122 [2024-11-29 15:59:05.471729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.168 ms 00:16:54.122 [2024-11-29 15:59:05.471736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.122 [2024-11-29 15:59:05.471880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.122 [2024-11-29 15:59:05.471890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:54.122 [2024-11-29 15:59:05.471899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:16:54.122 [2024-11-29 15:59:05.471907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.122 [2024-11-29 15:59:05.496313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.122 [2024-11-29 15:59:05.496344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:54.122 [2024-11-29 15:59:05.496354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.389 ms 00:16:54.122 [2024-11-29 15:59:05.496361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.122 [2024-11-29 15:59:05.520380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.122 [2024-11-29 15:59:05.520411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:54.122 
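
Each management step is traced as an Action / name / duration / status quadruple, and finish_msg reports the whole process total ('FTL startup' came to 286.226 ms above; 'FTL shutdown' lands at 311.803 ms below). A rough cross-check that the per-step durations account for those totals, over the same hypothetical one-entry-per-line capture:

  grep -o 'duration: [0-9.]* ms' ftl0.log |
    awk '{ sum += $2 } END { printf "sum of step durations: %.3f ms\n", sum }'

The sum should come in somewhat under the finish_msg totals, since those also include time spent between steps; the Rollback entries further down all report 0.000 ms, so they do not skew it.
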
[2024-11-29 15:59:05.520421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.974 ms 00:16:54.122 [2024-11-29 15:59:05.520427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.122 [2024-11-29 15:59:05.543831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.123 [2024-11-29 15:59:05.543863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:54.123 [2024-11-29 15:59:05.543873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.354 ms 00:16:54.123 [2024-11-29 15:59:05.543880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.385 [2024-11-29 15:59:05.567543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.385 [2024-11-29 15:59:05.567578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:54.385 [2024-11-29 15:59:05.567589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.589 ms 00:16:54.385 [2024-11-29 15:59:05.567597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.386 [2024-11-29 15:59:05.567645] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:54.386 [2024-11-29 15:59:05.567659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567778] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 
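
The bands-validity dump continues below through Band 100; after a clean shutdown every band should read '0 / 261120 wr_cnt: 0 state: free'. A one-line tally over the capture confirms no band was left open:

  grep -o 'state: [a-z]*' ftl0.log | sort | uniq -c

A single bucket of 100 'free' is the expected answer here. Relatedly, the statistics dump that follows the band list reports 'WAF: inf': write amplification is the ratio of total media writes to user writes, and this run logged 960 internal writes against 0 user writes, so the ratio is infinite.
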
[2024-11-29 15:59:05.567967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.567993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 
state: free 00:16:54.386 [2024-11-29 15:59:05.568161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:54.386 [2024-11-29 15:59:05.568310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:54.387 [2024-11-29 15:59:05.568318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:54.387 [2024-11-29 15:59:05.568325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:54.387 [2024-11-29 15:59:05.568332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:54.387 [2024-11-29 15:59:05.568340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 
0 / 261120 wr_cnt: 0 state: free 00:16:54.387 [2024-11-29 15:59:05.568347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:54.387 [2024-11-29 15:59:05.568355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:54.387 [2024-11-29 15:59:05.568368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:54.387 [2024-11-29 15:59:05.568375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:54.387 [2024-11-29 15:59:05.568382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:54.387 [2024-11-29 15:59:05.568390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:54.387 [2024-11-29 15:59:05.568405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:54.387 [2024-11-29 15:59:05.568412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:54.387 [2024-11-29 15:59:05.568419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:54.387 [2024-11-29 15:59:05.568435] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:54.387 [2024-11-29 15:59:05.568443] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 08798bdb-4e1e-4c21-9160-7bdd4bf640f7 00:16:54.387 [2024-11-29 15:59:05.568451] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:54.387 [2024-11-29 15:59:05.568459] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:54.387 [2024-11-29 15:59:05.568466] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:54.387 [2024-11-29 15:59:05.568475] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:54.387 [2024-11-29 15:59:05.568482] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:54.387 [2024-11-29 15:59:05.568490] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:54.387 [2024-11-29 15:59:05.568500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:54.387 [2024-11-29 15:59:05.568507] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:54.387 [2024-11-29 15:59:05.568514] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:54.387 [2024-11-29 15:59:05.568520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.387 [2024-11-29 15:59:05.568529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:54.387 [2024-11-29 15:59:05.568536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.876 ms 00:16:54.387 [2024-11-29 15:59:05.568544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.387 [2024-11-29 15:59:05.581153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.387 [2024-11-29 15:59:05.581181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:54.387 [2024-11-29 15:59:05.581191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.582 ms 00:16:54.387 [2024-11-29 15:59:05.581203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.387 [2024-11-29 15:59:05.581412] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:16:54.387 [2024-11-29 15:59:05.581421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:54.387 [2024-11-29 15:59:05.581429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:16:54.387 [2024-11-29 15:59:05.581436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.387 [2024-11-29 15:59:05.619242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.387 [2024-11-29 15:59:05.619276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:54.387 [2024-11-29 15:59:05.619286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.387 [2024-11-29 15:59:05.619298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.387 [2024-11-29 15:59:05.619375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.387 [2024-11-29 15:59:05.619383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:54.387 [2024-11-29 15:59:05.619391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.387 [2024-11-29 15:59:05.619398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.387 [2024-11-29 15:59:05.619436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.387 [2024-11-29 15:59:05.619445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:54.387 [2024-11-29 15:59:05.619453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.387 [2024-11-29 15:59:05.619460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.387 [2024-11-29 15:59:05.619479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.387 [2024-11-29 15:59:05.619487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:54.387 [2024-11-29 15:59:05.619495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.387 [2024-11-29 15:59:05.619502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.387 [2024-11-29 15:59:05.692651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.387 [2024-11-29 15:59:05.692688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:54.387 [2024-11-29 15:59:05.692698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.387 [2024-11-29 15:59:05.692709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.387 [2024-11-29 15:59:05.721850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.387 [2024-11-29 15:59:05.721883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:54.387 [2024-11-29 15:59:05.721893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.387 [2024-11-29 15:59:05.721901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.387 [2024-11-29 15:59:05.721948] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.387 [2024-11-29 15:59:05.721957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:54.387 [2024-11-29 15:59:05.721965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.387 [2024-11-29 15:59:05.721989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:54.387 [2024-11-29 15:59:05.722018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.387 [2024-11-29 15:59:05.722031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:54.387 [2024-11-29 15:59:05.722039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.387 [2024-11-29 15:59:05.722046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.387 [2024-11-29 15:59:05.722131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.387 [2024-11-29 15:59:05.722145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:54.387 [2024-11-29 15:59:05.722153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.387 [2024-11-29 15:59:05.722161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.387 [2024-11-29 15:59:05.722188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.387 [2024-11-29 15:59:05.722199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:54.387 [2024-11-29 15:59:05.722207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.387 [2024-11-29 15:59:05.722214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.387 [2024-11-29 15:59:05.722248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.387 [2024-11-29 15:59:05.722259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:54.387 [2024-11-29 15:59:05.722266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.387 [2024-11-29 15:59:05.722273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.387 [2024-11-29 15:59:05.722314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.387 [2024-11-29 15:59:05.722328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:54.387 [2024-11-29 15:59:05.722339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.387 [2024-11-29 15:59:05.722346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.387 [2024-11-29 15:59:05.722476] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 311.803 ms, result 0 00:16:55.326 00:16:55.326 00:16:55.326 15:59:06 -- ftl/trim.sh@72 -- # svcpid=72193 00:16:55.326 15:59:06 -- ftl/trim.sh@73 -- # waitforlisten 72193 00:16:55.326 15:59:06 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:55.326 15:59:06 -- common/autotest_common.sh@829 -- # '[' -z 72193 ']' 00:16:55.326 15:59:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.326 15:59:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:55.326 15:59:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.326 15:59:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:55.326 15:59:06 -- common/autotest_common.sh@10 -- # set +x 00:16:55.326 [2024-11-29 15:59:06.602187] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
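
trim.sh then starts a fresh spdk_tgt with -L ftl_init and calls waitforlisten, which blocks until the target answers on /var/tmp/spdk.sock. The gist of that wait, sketched from the xtrace above (rpc_addr and max_retries are the variable names it really sets; the loop body is illustrative, not the actual autotest_common.sh implementation, which also probes the RPC server rather than just the socket):

  rpc_addr=/var/tmp/spdk.sock
  max_retries=100
  for ((i = 0; i < max_retries; i++)); do
      [[ -S $rpc_addr ]] && break   # stop as soon as the UNIX socket exists
      sleep 0.1
  done
  [[ -S $rpc_addr ]] || { echo 'spdk_tgt did not come up' >&2; exit 1; }

Once the socket is live, the script drives the target over rpc.py, as the load_config call below shows.
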
00:16:55.326 [2024-11-29 15:59:06.602311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72193 ] 00:16:55.326 [2024-11-29 15:59:06.748098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.585 [2024-11-29 15:59:06.885532] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:55.585 [2024-11-29 15:59:06.885693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.152 15:59:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:56.152 15:59:07 -- common/autotest_common.sh@862 -- # return 0 00:16:56.152 15:59:07 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:56.411 [2024-11-29 15:59:07.585954] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:56.411 [2024-11-29 15:59:07.586005] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:56.411 [2024-11-29 15:59:07.727152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.411 [2024-11-29 15:59:07.727183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:56.411 [2024-11-29 15:59:07.727195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:56.411 [2024-11-29 15:59:07.727202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.411 [2024-11-29 15:59:07.729319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.411 [2024-11-29 15:59:07.729346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:56.411 [2024-11-29 15:59:07.729355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.102 ms 00:16:56.411 [2024-11-29 15:59:07.729361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.411 [2024-11-29 15:59:07.729426] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:56.411 [2024-11-29 15:59:07.730034] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:56.411 [2024-11-29 15:59:07.730058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.411 [2024-11-29 15:59:07.730064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:56.411 [2024-11-29 15:59:07.730073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.637 ms 00:16:56.411 [2024-11-29 15:59:07.730079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.411 [2024-11-29 15:59:07.731082] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:56.411 [2024-11-29 15:59:07.740988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.411 [2024-11-29 15:59:07.741012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:56.411 [2024-11-29 15:59:07.741021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.910 ms 00:16:56.411 [2024-11-29 15:59:07.741028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.411 [2024-11-29 15:59:07.741096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.411 [2024-11-29 15:59:07.741107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:16:56.411 [2024-11-29 15:59:07.741113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:56.411 [2024-11-29 15:59:07.741120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.411 [2024-11-29 15:59:07.745670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.411 [2024-11-29 15:59:07.745703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:56.411 [2024-11-29 15:59:07.745710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.510 ms 00:16:56.411 [2024-11-29 15:59:07.745718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.411 [2024-11-29 15:59:07.745784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.411 [2024-11-29 15:59:07.745793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:56.411 [2024-11-29 15:59:07.745800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:56.411 [2024-11-29 15:59:07.745808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.411 [2024-11-29 15:59:07.745828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.411 [2024-11-29 15:59:07.745836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:56.411 [2024-11-29 15:59:07.745842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:56.411 [2024-11-29 15:59:07.745850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.411 [2024-11-29 15:59:07.745872] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:56.411 [2024-11-29 15:59:07.748683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.411 [2024-11-29 15:59:07.748705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:56.411 [2024-11-29 15:59:07.748714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.817 ms 00:16:56.411 [2024-11-29 15:59:07.748719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.411 [2024-11-29 15:59:07.748751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.411 [2024-11-29 15:59:07.748758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:56.411 [2024-11-29 15:59:07.748766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:56.411 [2024-11-29 15:59:07.748773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.411 [2024-11-29 15:59:07.748790] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:56.411 [2024-11-29 15:59:07.748806] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:56.411 [2024-11-29 15:59:07.748832] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:56.411 [2024-11-29 15:59:07.748844] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:56.411 [2024-11-29 15:59:07.748901] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:56.411 [2024-11-29 15:59:07.748909] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:16:56.411 [2024-11-29 15:59:07.748921] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:56.411 [2024-11-29 15:59:07.748929] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:56.411 [2024-11-29 15:59:07.748937] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:56.411 [2024-11-29 15:59:07.748943] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:56.411 [2024-11-29 15:59:07.748950] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:56.411 [2024-11-29 15:59:07.748956] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:56.411 [2024-11-29 15:59:07.748965] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:56.411 [2024-11-29 15:59:07.748980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.411 [2024-11-29 15:59:07.748987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:56.411 [2024-11-29 15:59:07.748993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:16:56.411 [2024-11-29 15:59:07.749000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.411 [2024-11-29 15:59:07.749054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.411 [2024-11-29 15:59:07.749065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:56.411 [2024-11-29 15:59:07.749071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:56.411 [2024-11-29 15:59:07.749078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.411 [2024-11-29 15:59:07.749139] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:56.411 [2024-11-29 15:59:07.749148] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:56.411 [2024-11-29 15:59:07.749154] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:56.411 [2024-11-29 15:59:07.749161] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.412 [2024-11-29 15:59:07.749168] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:56.412 [2024-11-29 15:59:07.749176] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:56.412 [2024-11-29 15:59:07.749182] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:56.412 [2024-11-29 15:59:07.749191] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:56.412 [2024-11-29 15:59:07.749196] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:56.412 [2024-11-29 15:59:07.749202] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:56.412 [2024-11-29 15:59:07.749208] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:56.412 [2024-11-29 15:59:07.749214] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:56.412 [2024-11-29 15:59:07.749219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:56.412 [2024-11-29 15:59:07.749226] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:56.412 [2024-11-29 15:59:07.749231] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:56.412 [2024-11-29 15:59:07.749237] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.412 [2024-11-29 15:59:07.749243] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:56.412 [2024-11-29 15:59:07.749249] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:56.412 [2024-11-29 15:59:07.749254] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.412 [2024-11-29 15:59:07.749260] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:56.412 [2024-11-29 15:59:07.749265] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:56.412 [2024-11-29 15:59:07.749271] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:56.412 [2024-11-29 15:59:07.749276] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:56.412 [2024-11-29 15:59:07.749284] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:56.412 [2024-11-29 15:59:07.749289] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:56.412 [2024-11-29 15:59:07.749299] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:56.412 [2024-11-29 15:59:07.749304] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:56.412 [2024-11-29 15:59:07.749311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:56.412 [2024-11-29 15:59:07.749316] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:56.412 [2024-11-29 15:59:07.749322] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:56.412 [2024-11-29 15:59:07.749327] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:56.412 [2024-11-29 15:59:07.749334] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:56.412 [2024-11-29 15:59:07.749339] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:56.412 [2024-11-29 15:59:07.749345] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:56.412 [2024-11-29 15:59:07.749350] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:56.412 [2024-11-29 15:59:07.749356] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:56.412 [2024-11-29 15:59:07.749361] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:56.412 [2024-11-29 15:59:07.749369] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:56.412 [2024-11-29 15:59:07.749374] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:56.412 [2024-11-29 15:59:07.749381] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:56.412 [2024-11-29 15:59:07.749386] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:56.412 [2024-11-29 15:59:07.749395] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:56.412 [2024-11-29 15:59:07.749400] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:56.412 [2024-11-29 15:59:07.749407] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.412 [2024-11-29 15:59:07.749413] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:56.412 [2024-11-29 15:59:07.749420] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:56.412 [2024-11-29 15:59:07.749425] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
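
For reference, the dump_region records in this layout dump give each region's offset and size in MiB, while the superblock dump further down lists the same regions as hex block offsets and sizes (blk_offs/blk_sz). Assuming the 4 KiB FTL block size these figures imply, the two views reconcile; blocks_to_mib below is a hypothetical helper for illustration, not an SPDK API:

    # Illustrative sketch (not part of the test run): reconcile the superblock
    # dump's hex block counts with the MiB sizes printed by dump_region.
    # Assumes a 4 KiB FTL block, which the log's own numbers imply
    # (0x5a00 blocks * 4 KiB = 90.00 MiB = the l2p region).
    FTL_BLOCK_SIZE = 4096  # bytes, assumed

    def blocks_to_mib(blk_sz_hex: str) -> float:
        return int(blk_sz_hex, 16) * FTL_BLOCK_SIZE / (1024 * 1024)

    assert blocks_to_mib("0x5a00") == 90.0      # l2p: 90.00 MiB
    assert blocks_to_mib("0x400") == 4.0        # each p2l checkpoint: 4.00 MiB
    assert blocks_to_mib("0x100000") == 4096.0  # data_nvc: 4096.00 MiB

    # Cross-check: 23592960 L2P entries * 4-byte L2P addresses = 90 MiB too.
    assert 23592960 * 4 / (1024 * 1024) == 90.0
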
00:16:56.412 [2024-11-29 15:59:07.749431] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:56.412 [2024-11-29 15:59:07.749436] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:56.412 [2024-11-29 15:59:07.749443] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:56.412 [2024-11-29 15:59:07.749449] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:56.412 [2024-11-29 15:59:07.749457] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:56.412 [2024-11-29 15:59:07.749463] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:56.412 [2024-11-29 15:59:07.749470] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:56.412 [2024-11-29 15:59:07.749475] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:56.412 [2024-11-29 15:59:07.749484] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:56.412 [2024-11-29 15:59:07.749489] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:56.412 [2024-11-29 15:59:07.749496] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:56.412 [2024-11-29 15:59:07.749502] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:56.412 [2024-11-29 15:59:07.749508] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:56.412 [2024-11-29 15:59:07.749514] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:56.412 [2024-11-29 15:59:07.749520] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:56.412 [2024-11-29 15:59:07.749526] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:56.412 [2024-11-29 15:59:07.749533] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:56.412 [2024-11-29 15:59:07.749539] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:56.412 [2024-11-29 15:59:07.749545] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:56.412 [2024-11-29 15:59:07.749551] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:56.412 [2024-11-29 15:59:07.749559] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:56.412 [2024-11-29 15:59:07.749564] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:56.412 [2024-11-29 15:59:07.749572] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:56.412 [2024-11-29 15:59:07.749578] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:56.412 [2024-11-29 15:59:07.749586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.412 [2024-11-29 15:59:07.749592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:56.412 [2024-11-29 15:59:07.749599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:16:56.412 [2024-11-29 15:59:07.749605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.412 [2024-11-29 15:59:07.761726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.412 [2024-11-29 15:59:07.761750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:56.412 [2024-11-29 15:59:07.761761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.081 ms 00:16:56.412 [2024-11-29 15:59:07.761769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.412 [2024-11-29 15:59:07.761862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.412 [2024-11-29 15:59:07.761869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:56.412 [2024-11-29 15:59:07.761877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:56.412 [2024-11-29 15:59:07.761883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.412 [2024-11-29 15:59:07.786684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.412 [2024-11-29 15:59:07.786707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:56.412 [2024-11-29 15:59:07.786716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.783 ms 00:16:56.412 [2024-11-29 15:59:07.786723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.412 [2024-11-29 15:59:07.786770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.412 [2024-11-29 15:59:07.786779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:56.412 [2024-11-29 15:59:07.786787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:56.412 [2024-11-29 15:59:07.786794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.412 [2024-11-29 15:59:07.787110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.412 [2024-11-29 15:59:07.787126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:56.412 [2024-11-29 15:59:07.787136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:16:56.412 [2024-11-29 15:59:07.787142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.412 [2024-11-29 15:59:07.787234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.412 [2024-11-29 15:59:07.787244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:56.412 [2024-11-29 15:59:07.787253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:16:56.412 [2024-11-29 15:59:07.787259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:56.412 [2024-11-29 15:59:07.799235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.412 [2024-11-29 15:59:07.799256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:56.412 [2024-11-29 15:59:07.799266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.959 ms 00:16:56.412 [2024-11-29 15:59:07.799272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.412 [2024-11-29 15:59:07.808861] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:56.413 [2024-11-29 15:59:07.808888] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:56.413 [2024-11-29 15:59:07.808898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.413 [2024-11-29 15:59:07.808905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:56.413 [2024-11-29 15:59:07.808913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.550 ms 00:16:56.413 [2024-11-29 15:59:07.808918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.413 [2024-11-29 15:59:07.827674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.413 [2024-11-29 15:59:07.827700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:56.413 [2024-11-29 15:59:07.827710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.686 ms 00:16:56.413 [2024-11-29 15:59:07.827717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.413 [2024-11-29 15:59:07.836845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.413 [2024-11-29 15:59:07.836872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:56.413 [2024-11-29 15:59:07.836881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.073 ms 00:16:56.413 [2024-11-29 15:59:07.836887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.672 [2024-11-29 15:59:07.845593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.672 [2024-11-29 15:59:07.845616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:56.672 [2024-11-29 15:59:07.845627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.663 ms 00:16:56.672 [2024-11-29 15:59:07.845632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.672 [2024-11-29 15:59:07.845914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.672 [2024-11-29 15:59:07.845923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:56.672 [2024-11-29 15:59:07.845932] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:16:56.672 [2024-11-29 15:59:07.845938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.672 [2024-11-29 15:59:07.891603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.672 [2024-11-29 15:59:07.891633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:56.672 [2024-11-29 15:59:07.891646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.645 ms 00:16:56.672 [2024-11-29 15:59:07.891652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.672 [2024-11-29 15:59:07.899757] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:56.672 [2024-11-29 15:59:07.911296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.672 [2024-11-29 15:59:07.911324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:56.672 [2024-11-29 15:59:07.911333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.584 ms 00:16:56.672 [2024-11-29 15:59:07.911341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.672 [2024-11-29 15:59:07.911388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.672 [2024-11-29 15:59:07.911398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:56.672 [2024-11-29 15:59:07.911404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:56.672 [2024-11-29 15:59:07.911413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.672 [2024-11-29 15:59:07.911449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.672 [2024-11-29 15:59:07.911457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:56.672 [2024-11-29 15:59:07.911464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:16:56.672 [2024-11-29 15:59:07.911470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.672 [2024-11-29 15:59:07.912389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.672 [2024-11-29 15:59:07.912410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:56.672 [2024-11-29 15:59:07.912418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.902 ms 00:16:56.672 [2024-11-29 15:59:07.912425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.672 [2024-11-29 15:59:07.912449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.672 [2024-11-29 15:59:07.912457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:56.672 [2024-11-29 15:59:07.912463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:56.672 [2024-11-29 15:59:07.912470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.672 [2024-11-29 15:59:07.912496] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:56.672 [2024-11-29 15:59:07.912506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.672 [2024-11-29 15:59:07.912512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:56.672 [2024-11-29 15:59:07.912519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:56.672 [2024-11-29 15:59:07.912524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.672 [2024-11-29 15:59:07.930705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.672 [2024-11-29 15:59:07.930729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:56.672 [2024-11-29 15:59:07.930739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.162 ms 00:16:56.672 [2024-11-29 15:59:07.930745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.672 [2024-11-29 15:59:07.930814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.672 [2024-11-29 15:59:07.930822] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:56.672 [2024-11-29 15:59:07.930830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:56.672 [2024-11-29 15:59:07.930837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.672 [2024-11-29 15:59:07.931448] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:56.672 [2024-11-29 15:59:07.933881] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 204.088 ms, result 0 00:16:56.672 [2024-11-29 15:59:07.935247] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:56.672 Some configs were skipped because the RPC state that can call them passed over. 00:16:56.672 15:59:07 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:56.930 [2024-11-29 15:59:08.152596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.930 [2024-11-29 15:59:08.152631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:56.930 [2024-11-29 15:59:08.152640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.445 ms 00:16:56.930 [2024-11-29 15:59:08.152648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.930 [2024-11-29 15:59:08.152675] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 18.524 ms, result 0 00:16:56.930 true 00:16:56.930 15:59:08 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:57.189 [2024-11-29 15:59:08.367232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.189 [2024-11-29 15:59:08.367261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:57.189 [2024-11-29 15:59:08.367270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.379 ms 00:16:57.189 [2024-11-29 15:59:08.367276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.189 [2024-11-29 15:59:08.367303] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 18.450 ms, result 0 00:16:57.189 true 00:16:57.189 15:59:08 -- ftl/trim.sh@81 -- # killprocess 72193 00:16:57.189 15:59:08 -- common/autotest_common.sh@936 -- # '[' -z 72193 ']' 00:16:57.189 15:59:08 -- common/autotest_common.sh@940 -- # kill -0 72193 00:16:57.189 15:59:08 -- common/autotest_common.sh@941 -- # uname 00:16:57.189 15:59:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:57.189 15:59:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72193 00:16:57.189 killing process with pid 72193 00:16:57.189 15:59:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:57.189 15:59:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:57.189 15:59:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72193' 00:16:57.189 15:59:08 -- common/autotest_common.sh@955 -- # kill 72193 00:16:57.189 15:59:08 -- common/autotest_common.sh@960 -- # wait 72193 00:16:57.758 [2024-11-29 15:59:08.943252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.758 [2024-11-29 15:59:08.943301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 
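
For reference, the two bdev_ftl_unmap calls above (trim.sh@78 and @79) trim the first and the last 1024 blocks of the LBA space: given the 23592960 L2P entries reported at startup, --lba 23591936 is exactly 23592960 - 1024. A minimal sketch reconstructing only the command lines already shown in the trace:

    # Illustrative sketch (not part of the test run): the two trim ranges
    # issued by trim.sh@78 and trim.sh@79 above. All values come from the log.
    L2P_ENTRIES = 23592960   # from "L2P entries: 23592960"
    NUM_BLOCKS = 1024

    head_lba = 0                          # first call: --lba 0
    tail_lba = L2P_ENTRIES - NUM_BLOCKS   # second call
    assert tail_lba == 23591936           # matches --lba in the log

    for lba in (head_lba, tail_lba):
        print(f"rpc.py bdev_ftl_unmap -b ftl0 --lba {lba} "
              f"--num_blocks {NUM_BLOCKS}")
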
00:16:57.758 [2024-11-29 15:59:08.943311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:57.758 [2024-11-29 15:59:08.943318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.758 [2024-11-29 15:59:08.943337] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:57.758 [2024-11-29 15:59:08.945423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.758 [2024-11-29 15:59:08.945449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:57.758 [2024-11-29 15:59:08.945460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.072 ms 00:16:57.758 [2024-11-29 15:59:08.945466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.758 [2024-11-29 15:59:08.945716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.758 [2024-11-29 15:59:08.945733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:57.758 [2024-11-29 15:59:08.945741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:16:57.758 [2024-11-29 15:59:08.945747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.758 [2024-11-29 15:59:08.949054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.758 [2024-11-29 15:59:08.949080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:57.758 [2024-11-29 15:59:08.949090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.290 ms 00:16:57.758 [2024-11-29 15:59:08.949096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.758 [2024-11-29 15:59:08.954383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.758 [2024-11-29 15:59:08.954417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:57.758 [2024-11-29 15:59:08.954426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.261 ms 00:16:57.758 [2024-11-29 15:59:08.954433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.758 [2024-11-29 15:59:08.961911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.758 [2024-11-29 15:59:08.961937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:57.758 [2024-11-29 15:59:08.961947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.420 ms 00:16:57.758 [2024-11-29 15:59:08.961952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.758 [2024-11-29 15:59:08.968742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.758 [2024-11-29 15:59:08.968773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:57.758 [2024-11-29 15:59:08.968782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.750 ms 00:16:57.759 [2024-11-29 15:59:08.968788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.759 [2024-11-29 15:59:08.968896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.759 [2024-11-29 15:59:08.968903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:57.759 [2024-11-29 15:59:08.968911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:16:57.759 [2024-11-29 15:59:08.968917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.759 [2024-11-29 
15:59:08.976773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.759 [2024-11-29 15:59:08.976798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:57.759 [2024-11-29 15:59:08.976807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.839 ms 00:16:57.759 [2024-11-29 15:59:08.976812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.759 [2024-11-29 15:59:08.984387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.759 [2024-11-29 15:59:08.984413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:57.759 [2024-11-29 15:59:08.984425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.544 ms 00:16:57.759 [2024-11-29 15:59:08.984430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.759 [2024-11-29 15:59:08.991686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.759 [2024-11-29 15:59:08.991712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:57.759 [2024-11-29 15:59:08.991720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.225 ms 00:16:57.759 [2024-11-29 15:59:08.991725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.759 [2024-11-29 15:59:08.998831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.759 [2024-11-29 15:59:08.998857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:57.759 [2024-11-29 15:59:08.998865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.054 ms 00:16:57.759 [2024-11-29 15:59:08.998871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.759 [2024-11-29 15:59:08.998906] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:57.759 [2024-11-29 15:59:08.998918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.998928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.998934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.998941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.998947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.998955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.998961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.998968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.998982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.998990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.998995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999160] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:57.759 [2024-11-29 15:59:08.999247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 
15:59:08.999317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
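
For reference, each ftl_dev_dump_bands record reads "valid blocks / band capacity, write count, band state"; all 100 bands in this dump report 0 / 261120 with state free because no user data was written through the device, which is also why the statistics block after the band list shows "user writes: 0" and hence "WAF: inf" (write amplification, total writes over user writes). A small parser sketch, with the field layout inferred from these records:

    import re

    # Illustrative sketch (not part of the test run): parse one dump_bands
    # record as printed in this dump; field names are inferred, not SPDK's.
    BAND_RE = re.compile(
        r"Band (?P<band>\d+): (?P<valid>\d+) / (?P<total>\d+) "
        r"wr_cnt: (?P<wr_cnt>\d+) state: (?P<state>\w+)")

    m = BAND_RE.search("Band 42: 0 / 261120 wr_cnt: 0 state: free")
    assert m is not None
    assert (m["valid"], m["total"], m["state"]) == ("0", "261120", "free")
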
00:16:57.760 [2024-11-29 15:59:08.999472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:57.760 [2024-11-29 15:59:08.999569] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:57.760 [2024-11-29 15:59:08.999577] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 08798bdb-4e1e-4c21-9160-7bdd4bf640f7 00:16:57.760 [2024-11-29 15:59:08.999583] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:57.760 [2024-11-29 15:59:08.999590] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:57.760 [2024-11-29 15:59:08.999595] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:57.760 [2024-11-29 15:59:08.999603] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:57.760 [2024-11-29 15:59:08.999608] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:57.760 [2024-11-29 15:59:08.999615] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:57.760 [2024-11-29 15:59:08.999620] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:57.760 [2024-11-29 15:59:08.999626] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:57.760 [2024-11-29 15:59:08.999631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:57.760 [2024-11-29 15:59:08.999638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.760 [2024-11-29 15:59:08.999643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:57.760 [2024-11-29 15:59:08.999651] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:16:57.760 [2024-11-29 15:59:08.999657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.760 [2024-11-29 15:59:09.009198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.760 [2024-11-29 15:59:09.009225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:57.760 [2024-11-29 15:59:09.009236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.523 ms 00:16:57.760 [2024-11-29 15:59:09.009242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.760 [2024-11-29 15:59:09.009407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.760 [2024-11-29 15:59:09.009420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:57.760 [2024-11-29 15:59:09.009429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:16:57.760 [2024-11-29 15:59:09.009435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.760 [2024-11-29 15:59:09.044439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.760 [2024-11-29 15:59:09.044467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:57.761 [2024-11-29 15:59:09.044476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.761 [2024-11-29 15:59:09.044483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.761 [2024-11-29 15:59:09.044544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.761 [2024-11-29 15:59:09.044550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:57.761 [2024-11-29 15:59:09.044560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.761 [2024-11-29 15:59:09.044565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.761 [2024-11-29 15:59:09.044597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.761 [2024-11-29 15:59:09.044604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:57.761 [2024-11-29 15:59:09.044613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.761 [2024-11-29 15:59:09.044618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.761 [2024-11-29 15:59:09.044632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.761 [2024-11-29 15:59:09.044639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:57.761 [2024-11-29 15:59:09.044645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.761 [2024-11-29 15:59:09.044652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.761 [2024-11-29 15:59:09.104577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.761 [2024-11-29 15:59:09.104613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:57.761 [2024-11-29 15:59:09.104623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.761 [2024-11-29 15:59:09.104630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.761 [2024-11-29 15:59:09.126953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.761 [2024-11-29 15:59:09.126991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize metadata 00:16:57.761 [2024-11-29 15:59:09.127000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.761 [2024-11-29 15:59:09.127008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.761 [2024-11-29 15:59:09.127048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.761 [2024-11-29 15:59:09.127054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:57.761 [2024-11-29 15:59:09.127063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.761 [2024-11-29 15:59:09.127069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.761 [2024-11-29 15:59:09.127095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.761 [2024-11-29 15:59:09.127101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:57.761 [2024-11-29 15:59:09.127108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.761 [2024-11-29 15:59:09.127114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.761 [2024-11-29 15:59:09.127184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.761 [2024-11-29 15:59:09.127192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:57.761 [2024-11-29 15:59:09.127199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.761 [2024-11-29 15:59:09.127205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.761 [2024-11-29 15:59:09.127230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.761 [2024-11-29 15:59:09.127236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:57.761 [2024-11-29 15:59:09.127243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.761 [2024-11-29 15:59:09.127248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.761 [2024-11-29 15:59:09.127279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.761 [2024-11-29 15:59:09.127285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:57.761 [2024-11-29 15:59:09.127294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.761 [2024-11-29 15:59:09.127299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.761 [2024-11-29 15:59:09.127334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.761 [2024-11-29 15:59:09.127341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:57.761 [2024-11-29 15:59:09.127348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.761 [2024-11-29 15:59:09.127353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.761 [2024-11-29 15:59:09.127458] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 184.189 ms, result 0 00:16:58.700 15:59:09 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:58.700 15:59:09 -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:58.700 [2024-11-29 15:59:09.831551] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 
23.11.0 initialization... 00:16:58.700 [2024-11-29 15:59:09.831672] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72240 ] 00:16:58.700 [2024-11-29 15:59:09.977406] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.700 [2024-11-29 15:59:10.118600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.959 [2024-11-29 15:59:10.323128] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:58.959 [2024-11-29 15:59:10.323176] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:59.219 [2024-11-29 15:59:10.470669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.219 [2024-11-29 15:59:10.470709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:59.219 [2024-11-29 15:59:10.470719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:59.219 [2024-11-29 15:59:10.470725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.219 [2024-11-29 15:59:10.472728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.219 [2024-11-29 15:59:10.472759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:59.219 [2024-11-29 15:59:10.472767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.991 ms 00:16:59.219 [2024-11-29 15:59:10.472773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.219 [2024-11-29 15:59:10.472826] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:59.219 [2024-11-29 15:59:10.473379] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:59.219 [2024-11-29 15:59:10.473401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.219 [2024-11-29 15:59:10.473407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:59.219 [2024-11-29 15:59:10.473414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:16:59.219 [2024-11-29 15:59:10.473420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.219 [2024-11-29 15:59:10.474396] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:59.219 [2024-11-29 15:59:10.484465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.219 [2024-11-29 15:59:10.484492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:59.219 [2024-11-29 15:59:10.484501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.070 ms 00:16:59.219 [2024-11-29 15:59:10.484507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.219 [2024-11-29 15:59:10.484575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.219 [2024-11-29 15:59:10.484583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:59.219 [2024-11-29 15:59:10.484590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:59.219 [2024-11-29 15:59:10.484595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.219 [2024-11-29 15:59:10.488988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
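
For reference, trim.sh@85 above drives spdk_dd with --ib=ftl0 (input bdev) and --of=.../test/ftl/data, copying --count=65536 units out of the FTL bdev under the ftl.json config; the FTL startup being traced here belongs to that spdk_dd process re-attaching to the same device. A minimal sketch of the same invocation from Python, with flags and paths copied verbatim from the log:

    import subprocess

    # Illustrative sketch (not part of the test run): the read-back issued
    # by trim.sh@85 in the log. Flags and paths are copied from the log line.
    subprocess.run([
        "/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd",
        "--ib=ftl0",                                        # input bdev
        "--of=/home/vagrant/spdk_repo/spdk/test/ftl/data",  # output file
        "--count=65536",                                    # dd-style count
        "--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json",
    ], check=True)
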
00:16:59.219 [2024-11-29 15:59:10.489013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:59.219 [2024-11-29 15:59:10.489020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.363 ms 00:16:59.219 [2024-11-29 15:59:10.489028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.219 [2024-11-29 15:59:10.489101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.219 [2024-11-29 15:59:10.489109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:59.219 [2024-11-29 15:59:10.489115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:59.219 [2024-11-29 15:59:10.489121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.219 [2024-11-29 15:59:10.489138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.219 [2024-11-29 15:59:10.489145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:59.219 [2024-11-29 15:59:10.489151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:59.219 [2024-11-29 15:59:10.489157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.219 [2024-11-29 15:59:10.489181] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:59.219 [2024-11-29 15:59:10.491934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.219 [2024-11-29 15:59:10.491958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:59.219 [2024-11-29 15:59:10.491965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.763 ms 00:16:59.219 [2024-11-29 15:59:10.491980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.219 [2024-11-29 15:59:10.492009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.219 [2024-11-29 15:59:10.492016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:59.219 [2024-11-29 15:59:10.492022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:59.219 [2024-11-29 15:59:10.492027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.219 [2024-11-29 15:59:10.492041] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:59.219 [2024-11-29 15:59:10.492054] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:59.219 [2024-11-29 15:59:10.492079] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:59.219 [2024-11-29 15:59:10.492092] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:59.219 [2024-11-29 15:59:10.492148] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:59.219 [2024-11-29 15:59:10.492157] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:59.219 [2024-11-29 15:59:10.492164] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:59.219 [2024-11-29 15:59:10.492172] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:59.219 [2024-11-29 15:59:10.492179] ftl_layout.c: 
678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:59.219 [2024-11-29 15:59:10.492185] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:59.219 [2024-11-29 15:59:10.492190] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:59.219 [2024-11-29 15:59:10.492196] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:59.219 [2024-11-29 15:59:10.492203] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:59.219 [2024-11-29 15:59:10.492208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.219 [2024-11-29 15:59:10.492214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:59.219 [2024-11-29 15:59:10.492220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:16:59.219 [2024-11-29 15:59:10.492226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.219 [2024-11-29 15:59:10.492274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.219 [2024-11-29 15:59:10.492280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:59.219 [2024-11-29 15:59:10.492286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:59.219 [2024-11-29 15:59:10.492292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.219 [2024-11-29 15:59:10.492347] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:59.219 [2024-11-29 15:59:10.492355] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:59.219 [2024-11-29 15:59:10.492361] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:59.219 [2024-11-29 15:59:10.492367] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.219 [2024-11-29 15:59:10.492372] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:59.219 [2024-11-29 15:59:10.492377] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:59.219 [2024-11-29 15:59:10.492382] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:59.219 [2024-11-29 15:59:10.492387] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:59.219 [2024-11-29 15:59:10.492393] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:59.219 [2024-11-29 15:59:10.492398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:59.219 [2024-11-29 15:59:10.492403] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:59.219 [2024-11-29 15:59:10.492408] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:59.219 [2024-11-29 15:59:10.492413] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:59.219 [2024-11-29 15:59:10.492419] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:59.219 [2024-11-29 15:59:10.492429] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:59.219 [2024-11-29 15:59:10.492435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.219 [2024-11-29 15:59:10.492440] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:59.219 [2024-11-29 15:59:10.492446] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:59.219 [2024-11-29 15:59:10.492451] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.219 [2024-11-29 15:59:10.492456] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:59.219 [2024-11-29 15:59:10.492461] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:59.219 [2024-11-29 15:59:10.492466] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:59.219 [2024-11-29 15:59:10.492471] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:59.219 [2024-11-29 15:59:10.492476] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:59.219 [2024-11-29 15:59:10.492480] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:59.219 [2024-11-29 15:59:10.492485] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:59.219 [2024-11-29 15:59:10.492490] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:59.220 [2024-11-29 15:59:10.492495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:59.220 [2024-11-29 15:59:10.492500] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:59.220 [2024-11-29 15:59:10.492504] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:59.220 [2024-11-29 15:59:10.492509] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:59.220 [2024-11-29 15:59:10.492513] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:59.220 [2024-11-29 15:59:10.492518] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:59.220 [2024-11-29 15:59:10.492523] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:59.220 [2024-11-29 15:59:10.492528] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:59.220 [2024-11-29 15:59:10.492532] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:59.220 [2024-11-29 15:59:10.492537] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:59.220 [2024-11-29 15:59:10.492543] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:59.220 [2024-11-29 15:59:10.492548] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:59.220 [2024-11-29 15:59:10.492552] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:59.220 [2024-11-29 15:59:10.492557] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:59.220 [2024-11-29 15:59:10.492563] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:59.220 [2024-11-29 15:59:10.492568] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:59.220 [2024-11-29 15:59:10.492580] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.220 [2024-11-29 15:59:10.492586] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:59.220 [2024-11-29 15:59:10.492592] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:59.220 [2024-11-29 15:59:10.492597] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:59.220 [2024-11-29 15:59:10.492602] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:59.220 [2024-11-29 15:59:10.492607] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:59.220 [2024-11-29 15:59:10.492612] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:59.220 
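
The layout dump above can be sanity-checked with a little arithmetic (a back-of-the-envelope sketch, not part of the test output; it assumes the FTL block/page size is 4 KiB, which the log itself never states):

# L2P table: 23592960 entries x 4-byte addresses = 90 MiB,
# matching "Region l2p ... blocks: 90.00 MiB" above
echo $(( 23592960 * 4 / 1048576 ))   # 90

# P2L checkpoint: 1024 pages x 4 KiB = 4 MiB, matching each of the four
# p2l0..p2l3 regions -- one per NV cache chunk ("NV cache chunk count 4")
echo $(( 1024 * 4096 / 1048576 ))    # 4

# NV cache data region split across chunks: 4096 MiB / 4 chunks
echo $(( 4096 / 4 ))                 # 1024 MiB per chunk
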
[2024-11-29 15:59:10.492618] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:59.220 [2024-11-29 15:59:10.492624] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:59.220 [2024-11-29 15:59:10.492631] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:59.220 [2024-11-29 15:59:10.492636] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:59.220 [2024-11-29 15:59:10.492642] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:59.220 [2024-11-29 15:59:10.492647] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:59.220 [2024-11-29 15:59:10.492652] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:59.220 [2024-11-29 15:59:10.492657] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:59.220 [2024-11-29 15:59:10.492662] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:59.220 [2024-11-29 15:59:10.492667] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:59.220 [2024-11-29 15:59:10.492672] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:59.220 [2024-11-29 15:59:10.492678] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:59.220 [2024-11-29 15:59:10.492683] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:59.220 [2024-11-29 15:59:10.492689] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:59.220 [2024-11-29 15:59:10.492694] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:59.220 [2024-11-29 15:59:10.492699] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:59.220 [2024-11-29 15:59:10.492707] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:59.220 [2024-11-29 15:59:10.492713] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:59.220 [2024-11-29 15:59:10.492719] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:59.220 [2024-11-29 15:59:10.492724] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:59.220 [2024-11-29 15:59:10.492729] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:59.220 [2024-11-29 15:59:10.492734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.220 [2024-11-29 15:59:10.492740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:59.220 [2024-11-29 15:59:10.492745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:16:59.220 [2024-11-29 15:59:10.492750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.220 [2024-11-29 15:59:10.504614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.220 [2024-11-29 15:59:10.504642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:59.220 [2024-11-29 15:59:10.504650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.833 ms 00:16:59.220 [2024-11-29 15:59:10.504655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.220 [2024-11-29 15:59:10.504742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.220 [2024-11-29 15:59:10.504749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:59.220 [2024-11-29 15:59:10.504755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:59.220 [2024-11-29 15:59:10.504760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.220 [2024-11-29 15:59:10.543294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.220 [2024-11-29 15:59:10.543327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:59.220 [2024-11-29 15:59:10.543338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.518 ms 00:16:59.220 [2024-11-29 15:59:10.543344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.220 [2024-11-29 15:59:10.543401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.220 [2024-11-29 15:59:10.543409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:59.220 [2024-11-29 15:59:10.543419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:59.220 [2024-11-29 15:59:10.543425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.220 [2024-11-29 15:59:10.543700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.220 [2024-11-29 15:59:10.543718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:59.220 [2024-11-29 15:59:10.543725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:16:59.220 [2024-11-29 15:59:10.543730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.220 [2024-11-29 15:59:10.543823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.220 [2024-11-29 15:59:10.543830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:59.220 [2024-11-29 15:59:10.543836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:16:59.220 [2024-11-29 15:59:10.543841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.220 [2024-11-29 15:59:10.555163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.220 [2024-11-29 15:59:10.555189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:59.220 [2024-11-29 15:59:10.555197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
11.304 ms 00:16:59.220 [2024-11-29 15:59:10.555205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.220 [2024-11-29 15:59:10.565349] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:59.220 [2024-11-29 15:59:10.565378] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:59.220 [2024-11-29 15:59:10.565387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.220 [2024-11-29 15:59:10.565393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:59.220 [2024-11-29 15:59:10.565400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.107 ms 00:16:59.220 [2024-11-29 15:59:10.565406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.220 [2024-11-29 15:59:10.584341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.220 [2024-11-29 15:59:10.584372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:59.220 [2024-11-29 15:59:10.584381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.881 ms 00:16:59.220 [2024-11-29 15:59:10.584387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.220 [2024-11-29 15:59:10.593995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.220 [2024-11-29 15:59:10.594022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:59.220 [2024-11-29 15:59:10.594034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.554 ms 00:16:59.220 [2024-11-29 15:59:10.594040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.220 [2024-11-29 15:59:10.603200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.220 [2024-11-29 15:59:10.603226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:59.220 [2024-11-29 15:59:10.603233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.120 ms 00:16:59.220 [2024-11-29 15:59:10.603238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.220 [2024-11-29 15:59:10.603506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.220 [2024-11-29 15:59:10.603528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:59.220 [2024-11-29 15:59:10.603535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:16:59.220 [2024-11-29 15:59:10.603542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.482 [2024-11-29 15:59:10.649568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.482 [2024-11-29 15:59:10.649609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:59.482 [2024-11-29 15:59:10.649619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.008 ms 00:16:59.482 [2024-11-29 15:59:10.649629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.482 [2024-11-29 15:59:10.657433] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:59.482 [2024-11-29 15:59:10.668862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.482 [2024-11-29 15:59:10.668890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:59.482 [2024-11-29 
15:59:10.668899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.159 ms 00:16:59.482 [2024-11-29 15:59:10.668905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.482 [2024-11-29 15:59:10.668957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.482 [2024-11-29 15:59:10.668965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:59.482 [2024-11-29 15:59:10.668984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:59.482 [2024-11-29 15:59:10.668990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.482 [2024-11-29 15:59:10.669027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.482 [2024-11-29 15:59:10.669033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:59.482 [2024-11-29 15:59:10.669039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:59.482 [2024-11-29 15:59:10.669045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.482 [2024-11-29 15:59:10.669960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.482 [2024-11-29 15:59:10.670002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:59.482 [2024-11-29 15:59:10.670010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.898 ms 00:16:59.482 [2024-11-29 15:59:10.670016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.482 [2024-11-29 15:59:10.670040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.482 [2024-11-29 15:59:10.670049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:59.482 [2024-11-29 15:59:10.670055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:59.482 [2024-11-29 15:59:10.670061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.482 [2024-11-29 15:59:10.670086] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:59.482 [2024-11-29 15:59:10.670093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.482 [2024-11-29 15:59:10.670099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:59.482 [2024-11-29 15:59:10.670104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:59.482 [2024-11-29 15:59:10.670110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.482 [2024-11-29 15:59:10.688635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.482 [2024-11-29 15:59:10.688663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:59.482 [2024-11-29 15:59:10.688671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.510 ms 00:16:59.482 [2024-11-29 15:59:10.688677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.482 [2024-11-29 15:59:10.688749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.482 [2024-11-29 15:59:10.688757] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:59.482 [2024-11-29 15:59:10.688763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:59.482 [2024-11-29 15:59:10.688769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.482 [2024-11-29 15:59:10.689456] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:59.482 [2024-11-29 15:59:10.691881] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 218.577 ms, result 0 00:16:59.482 [2024-11-29 15:59:10.692880] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:59.482 [2024-11-29 15:59:10.703938] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:00.427  [2024-11-29T15:59:12.802Z] Copying: 21/256 [MB] (21 MBps) [2024-11-29T15:59:13.747Z] Copying: 39/256 [MB] (17 MBps) [2024-11-29T15:59:15.165Z] Copying: 55/256 [MB] (16 MBps) [2024-11-29T15:59:15.744Z] Copying: 77/256 [MB] (22 MBps) [2024-11-29T15:59:17.132Z] Copying: 88/256 [MB] (10 MBps) [2024-11-29T15:59:18.078Z] Copying: 99/256 [MB] (10 MBps) [2024-11-29T15:59:19.024Z] Copying: 111/256 [MB] (12 MBps) [2024-11-29T15:59:19.969Z] Copying: 126/256 [MB] (14 MBps) [2024-11-29T15:59:20.911Z] Copying: 144/256 [MB] (17 MBps) [2024-11-29T15:59:21.856Z] Copying: 162/256 [MB] (18 MBps) [2024-11-29T15:59:22.796Z] Copying: 174/256 [MB] (12 MBps) [2024-11-29T15:59:23.741Z] Copying: 193/256 [MB] (18 MBps) [2024-11-29T15:59:25.130Z] Copying: 204/256 [MB] (10 MBps) [2024-11-29T15:59:26.075Z] Copying: 214/256 [MB] (10 MBps) [2024-11-29T15:59:27.021Z] Copying: 231/256 [MB] (17 MBps) [2024-11-29T15:59:27.594Z] Copying: 246/256 [MB] (15 MBps) [2024-11-29T15:59:27.594Z] Copying: 256/256 [MB] (average 15 MBps)[2024-11-29 15:59:27.396856] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:16.163 [2024-11-29 15:59:27.407024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.163 [2024-11-29 15:59:27.407079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:16.163 [2024-11-29 15:59:27.407094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:16.163 [2024-11-29 15:59:27.407102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.163 [2024-11-29 15:59:27.407127] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:16.163 [2024-11-29 15:59:27.410115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.163 [2024-11-29 15:59:27.410154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:16.163 [2024-11-29 15:59:27.410166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.973 ms 00:17:16.163 [2024-11-29 15:59:27.410174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.163 [2024-11-29 15:59:27.410453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.163 [2024-11-29 15:59:27.410465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:16.163 [2024-11-29 15:59:27.410474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:17:16.163 [2024-11-29 15:59:27.410487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.163 [2024-11-29 15:59:27.414210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.163 [2024-11-29 15:59:27.414233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:16.163 [2024-11-29 15:59:27.414244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.707 
ms 00:17:16.163 [2024-11-29 15:59:27.414253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.163 [2024-11-29 15:59:27.421307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.164 [2024-11-29 15:59:27.421349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:16.164 [2024-11-29 15:59:27.421360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.020 ms 00:17:16.164 [2024-11-29 15:59:27.421367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.164 [2024-11-29 15:59:27.447279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.164 [2024-11-29 15:59:27.447332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:16.164 [2024-11-29 15:59:27.447345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.836 ms 00:17:16.164 [2024-11-29 15:59:27.447352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.164 [2024-11-29 15:59:27.463524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.164 [2024-11-29 15:59:27.463570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:16.164 [2024-11-29 15:59:27.463583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.105 ms 00:17:16.164 [2024-11-29 15:59:27.463591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.164 [2024-11-29 15:59:27.463758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.164 [2024-11-29 15:59:27.463773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:16.164 [2024-11-29 15:59:27.463784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:17:16.164 [2024-11-29 15:59:27.463792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.164 [2024-11-29 15:59:27.490113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.164 [2024-11-29 15:59:27.490160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:16.164 [2024-11-29 15:59:27.490171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.303 ms 00:17:16.164 [2024-11-29 15:59:27.490178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.164 [2024-11-29 15:59:27.515871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.164 [2024-11-29 15:59:27.515914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:16.164 [2024-11-29 15:59:27.515925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.618 ms 00:17:16.164 [2024-11-29 15:59:27.515932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.164 [2024-11-29 15:59:27.541038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.164 [2024-11-29 15:59:27.541084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:16.164 [2024-11-29 15:59:27.541095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.020 ms 00:17:16.164 [2024-11-29 15:59:27.541102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.164 [2024-11-29 15:59:27.565928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.164 [2024-11-29 15:59:27.565981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:16.164 [2024-11-29 15:59:27.565993] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.731 ms 00:17:16.164 [2024-11-29 15:59:27.565999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.164 [2024-11-29 15:59:27.566062] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:16.164 [2024-11-29 15:59:27.566079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 
261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:16.164 [2024-11-29 15:59:27.566563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566630] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 
15:59:27.566835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:16.165 [2024-11-29 15:59:27.566866] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:16.165 [2024-11-29 15:59:27.566874] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 08798bdb-4e1e-4c21-9160-7bdd4bf640f7 00:17:16.165 [2024-11-29 15:59:27.566882] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:16.165 [2024-11-29 15:59:27.566890] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:16.165 [2024-11-29 15:59:27.566898] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:16.165 [2024-11-29 15:59:27.566906] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:16.165 [2024-11-29 15:59:27.566914] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:16.165 [2024-11-29 15:59:27.566925] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:16.165 [2024-11-29 15:59:27.566932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:16.165 [2024-11-29 15:59:27.566939] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:16.165 [2024-11-29 15:59:27.566946] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:16.165 [2024-11-29 15:59:27.566953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.165 [2024-11-29 15:59:27.566962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:16.165 [2024-11-29 15:59:27.566983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.892 ms 00:17:16.165 [2024-11-29 15:59:27.567012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.165 [2024-11-29 15:59:27.580563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.165 [2024-11-29 15:59:27.580606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:16.165 [2024-11-29 15:59:27.580625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.516 ms 00:17:16.165 [2024-11-29 15:59:27.580633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.165 [2024-11-29 15:59:27.580876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.165 [2024-11-29 15:59:27.580888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:16.165 [2024-11-29 15:59:27.580896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:17:16.165 [2024-11-29 15:59:27.580904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.428 [2024-11-29 15:59:27.622360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.428 [2024-11-29 15:59:27.622409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:16.428 [2024-11-29 15:59:27.622427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.428 [2024-11-29 15:59:27.622435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.428 [2024-11-29 15:59:27.622533] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:16.428 [2024-11-29 15:59:27.622543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:16.428 [2024-11-29 15:59:27.622552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.428 [2024-11-29 15:59:27.622563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.428 [2024-11-29 15:59:27.622613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.428 [2024-11-29 15:59:27.622624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:16.428 [2024-11-29 15:59:27.622632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.428 [2024-11-29 15:59:27.622645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.428 [2024-11-29 15:59:27.622663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.428 [2024-11-29 15:59:27.622670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:16.428 [2024-11-29 15:59:27.622678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.428 [2024-11-29 15:59:27.622686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.428 [2024-11-29 15:59:27.705247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.428 [2024-11-29 15:59:27.705312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:16.428 [2024-11-29 15:59:27.705330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.428 [2024-11-29 15:59:27.705339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.428 [2024-11-29 15:59:27.738282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.428 [2024-11-29 15:59:27.738327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:16.428 [2024-11-29 15:59:27.738339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.428 [2024-11-29 15:59:27.738348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.428 [2024-11-29 15:59:27.738414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.428 [2024-11-29 15:59:27.738424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:16.428 [2024-11-29 15:59:27.738433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.428 [2024-11-29 15:59:27.738441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.428 [2024-11-29 15:59:27.738481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.428 [2024-11-29 15:59:27.738491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:16.428 [2024-11-29 15:59:27.738498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.428 [2024-11-29 15:59:27.738507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.428 [2024-11-29 15:59:27.738604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.428 [2024-11-29 15:59:27.738616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:16.428 [2024-11-29 15:59:27.738628] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.428 [2024-11-29 15:59:27.738638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:17:16.428 [2024-11-29 15:59:27.738679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.428 [2024-11-29 15:59:27.738689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:16.428 [2024-11-29 15:59:27.738698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.428 [2024-11-29 15:59:27.738707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.428 [2024-11-29 15:59:27.738751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.428 [2024-11-29 15:59:27.738763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:16.428 [2024-11-29 15:59:27.738772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.428 [2024-11-29 15:59:27.738780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.428 [2024-11-29 15:59:27.738835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.428 [2024-11-29 15:59:27.738850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:16.428 [2024-11-29 15:59:27.738859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.428 [2024-11-29 15:59:27.738867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.428 [2024-11-29 15:59:27.739054] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 332.052 ms, result 0 00:17:17.365 00:17:17.365 00:17:17.365 15:59:28 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:17.365 15:59:28 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:17.626 15:59:29 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:17.885 [2024-11-29 15:59:29.077109] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
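
The three trim.sh commands above sketch the shape of the whole test: cmp checks a 4 MiB range (--bytes=4194304) against /dev/zero, i.e. that the trimmed range reads back as zeroes; md5sum fingerprints the data file; and spdk_dd then seeds ftl0 with a random pattern for the next pass. A minimal reconstruction of that write/read-back/verify cycle follows; the paths are illustrative, and the read-back flags (--ib/--of) are assumed by symmetry with the --if/--ob pair that actually appears in the log:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
CFG=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

# 1. write --count=1024 blocks of a known pattern into the FTL bdev
$DD --if=./random_pattern --ob=ftl0 --count=1024 --json=$CFG

# 2. read the same range back out of ftl0 into a scratch file (assumed flags)
$DD --ib=ftl0 --of=./readback --count=1024 --json=$CFG

# 3. the round trip succeeded iff both checksums agree
md5sum ./random_pattern ./readback
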
00:17:17.885 [2024-11-29 15:59:29.077252] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72453 ] 00:17:17.885 [2024-11-29 15:59:29.227944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.143 [2024-11-29 15:59:29.364410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.144 [2024-11-29 15:59:29.568163] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:18.144 [2024-11-29 15:59:29.568213] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:18.403 [2024-11-29 15:59:29.711393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.403 [2024-11-29 15:59:29.711433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:18.403 [2024-11-29 15:59:29.711444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:18.403 [2024-11-29 15:59:29.711450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.403 [2024-11-29 15:59:29.713483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.403 [2024-11-29 15:59:29.713516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:18.403 [2024-11-29 15:59:29.713524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.022 ms 00:17:18.403 [2024-11-29 15:59:29.713530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.403 [2024-11-29 15:59:29.713585] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:18.403 [2024-11-29 15:59:29.714159] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:18.403 [2024-11-29 15:59:29.714181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.403 [2024-11-29 15:59:29.714188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:18.403 [2024-11-29 15:59:29.714195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:17:18.403 [2024-11-29 15:59:29.714201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.403 [2024-11-29 15:59:29.715562] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:18.403 [2024-11-29 15:59:29.725078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.403 [2024-11-29 15:59:29.725108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:18.403 [2024-11-29 15:59:29.725118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.517 ms 00:17:18.403 [2024-11-29 15:59:29.725124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.403 [2024-11-29 15:59:29.725211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.403 [2024-11-29 15:59:29.725219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:18.403 [2024-11-29 15:59:29.725227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:18.403 [2024-11-29 15:59:29.725234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.403 [2024-11-29 15:59:29.729457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.403 [2024-11-29 
15:59:29.729481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:18.403 [2024-11-29 15:59:29.729488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.192 ms 00:17:18.403 [2024-11-29 15:59:29.729497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.403 [2024-11-29 15:59:29.729582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.403 [2024-11-29 15:59:29.729590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:18.403 [2024-11-29 15:59:29.729597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:18.403 [2024-11-29 15:59:29.729602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.403 [2024-11-29 15:59:29.729623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.403 [2024-11-29 15:59:29.729629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:18.403 [2024-11-29 15:59:29.729635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:18.403 [2024-11-29 15:59:29.729640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.403 [2024-11-29 15:59:29.729664] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:18.403 [2024-11-29 15:59:29.732420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.403 [2024-11-29 15:59:29.732446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:18.403 [2024-11-29 15:59:29.732453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.768 ms 00:17:18.403 [2024-11-29 15:59:29.732461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.403 [2024-11-29 15:59:29.732491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.403 [2024-11-29 15:59:29.732498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:18.403 [2024-11-29 15:59:29.732504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:18.403 [2024-11-29 15:59:29.732510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.403 [2024-11-29 15:59:29.732524] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:18.403 [2024-11-29 15:59:29.732538] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:18.403 [2024-11-29 15:59:29.732563] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:18.403 [2024-11-29 15:59:29.732576] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:18.403 [2024-11-29 15:59:29.732633] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:18.403 [2024-11-29 15:59:29.732641] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:18.403 [2024-11-29 15:59:29.732649] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:18.403 [2024-11-29 15:59:29.732657] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:18.403 [2024-11-29 15:59:29.732664] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:18.403 [2024-11-29 15:59:29.732670] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:18.403 [2024-11-29 15:59:29.732675] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:18.403 [2024-11-29 15:59:29.732681] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:18.403 [2024-11-29 15:59:29.732689] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:18.403 [2024-11-29 15:59:29.732695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.403 [2024-11-29 15:59:29.732701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:18.403 [2024-11-29 15:59:29.732707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:17:18.403 [2024-11-29 15:59:29.732713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.403 [2024-11-29 15:59:29.732764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.403 [2024-11-29 15:59:29.732770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:18.403 [2024-11-29 15:59:29.732776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:18.404 [2024-11-29 15:59:29.732782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.404 [2024-11-29 15:59:29.732839] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:18.404 [2024-11-29 15:59:29.732846] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:18.404 [2024-11-29 15:59:29.732852] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:18.404 [2024-11-29 15:59:29.732858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.404 [2024-11-29 15:59:29.732865] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:18.404 [2024-11-29 15:59:29.732871] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:18.404 [2024-11-29 15:59:29.732876] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:18.404 [2024-11-29 15:59:29.732881] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:18.404 [2024-11-29 15:59:29.732887] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:18.404 [2024-11-29 15:59:29.732892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:18.404 [2024-11-29 15:59:29.732898] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:18.404 [2024-11-29 15:59:29.732903] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:18.404 [2024-11-29 15:59:29.732907] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:18.404 [2024-11-29 15:59:29.732912] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:18.404 [2024-11-29 15:59:29.732922] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:18.404 [2024-11-29 15:59:29.732927] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.404 [2024-11-29 15:59:29.732932] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:18.404 [2024-11-29 15:59:29.732937] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:18.404 [2024-11-29 15:59:29.732943] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:18.404 [2024-11-29 15:59:29.732948] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:18.404 [2024-11-29 15:59:29.732953] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:18.404 [2024-11-29 15:59:29.732958] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:18.404 [2024-11-29 15:59:29.732963] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:18.404 [2024-11-29 15:59:29.732968] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:18.404 [2024-11-29 15:59:29.732995] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:18.404 [2024-11-29 15:59:29.733000] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:18.404 [2024-11-29 15:59:29.733005] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:18.404 [2024-11-29 15:59:29.733010] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:18.404 [2024-11-29 15:59:29.733015] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:18.404 [2024-11-29 15:59:29.733020] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:18.404 [2024-11-29 15:59:29.733025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:18.404 [2024-11-29 15:59:29.733030] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:18.404 [2024-11-29 15:59:29.733034] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:18.404 [2024-11-29 15:59:29.733039] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:18.404 [2024-11-29 15:59:29.733045] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:18.404 [2024-11-29 15:59:29.733050] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:18.404 [2024-11-29 15:59:29.733056] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:18.404 [2024-11-29 15:59:29.733061] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:18.404 [2024-11-29 15:59:29.733066] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:18.404 [2024-11-29 15:59:29.733071] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:18.404 [2024-11-29 15:59:29.733076] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:18.404 [2024-11-29 15:59:29.733082] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:18.404 [2024-11-29 15:59:29.733088] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:18.404 [2024-11-29 15:59:29.733095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.404 [2024-11-29 15:59:29.733101] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:18.404 [2024-11-29 15:59:29.733107] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:18.404 [2024-11-29 15:59:29.733112] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:18.404 [2024-11-29 15:59:29.733117] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:18.404 [2024-11-29 15:59:29.733122] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:18.404 [2024-11-29 15:59:29.733128] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:18.404 [2024-11-29 15:59:29.733133] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:18.404 [2024-11-29 15:59:29.733140] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:18.404 [2024-11-29 15:59:29.733147] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:18.404 [2024-11-29 15:59:29.733153] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:18.404 [2024-11-29 15:59:29.733158] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:18.404 [2024-11-29 15:59:29.733163] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:18.404 [2024-11-29 15:59:29.733169] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:18.404 [2024-11-29 15:59:29.733174] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:18.404 [2024-11-29 15:59:29.733179] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:18.404 [2024-11-29 15:59:29.733186] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:18.404 [2024-11-29 15:59:29.733191] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:18.404 [2024-11-29 15:59:29.733196] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:18.404 [2024-11-29 15:59:29.733202] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:18.404 [2024-11-29 15:59:29.733207] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:18.404 [2024-11-29 15:59:29.733213] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:18.404 [2024-11-29 15:59:29.733219] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:18.404 [2024-11-29 15:59:29.733228] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:18.404 [2024-11-29 15:59:29.733234] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:18.404 [2024-11-29 15:59:29.733241] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:18.404 [2024-11-29 15:59:29.733247] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:18.404 [2024-11-29 15:59:29.733253] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:18.404 [2024-11-29 15:59:29.733258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.404 [2024-11-29 15:59:29.733264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:18.404 [2024-11-29 15:59:29.733270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:17:18.404 [2024-11-29 15:59:29.733275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.404 [2024-11-29 15:59:29.745488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.404 [2024-11-29 15:59:29.745519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:18.404 [2024-11-29 15:59:29.745530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.180 ms 00:17:18.404 [2024-11-29 15:59:29.745537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.404 [2024-11-29 15:59:29.745627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.404 [2024-11-29 15:59:29.745634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:18.404 [2024-11-29 15:59:29.745640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:18.404 [2024-11-29 15:59:29.745646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.404 [2024-11-29 15:59:29.780061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.404 [2024-11-29 15:59:29.780094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:18.404 [2024-11-29 15:59:29.780104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.398 ms 00:17:18.404 [2024-11-29 15:59:29.780111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.404 [2024-11-29 15:59:29.780168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.404 [2024-11-29 15:59:29.780176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:18.404 [2024-11-29 15:59:29.780186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:18.404 [2024-11-29 15:59:29.780192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.404 [2024-11-29 15:59:29.780471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.404 [2024-11-29 15:59:29.780484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:18.404 [2024-11-29 15:59:29.780491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:17:18.404 [2024-11-29 15:59:29.780497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.404 [2024-11-29 15:59:29.780590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.404 [2024-11-29 15:59:29.780610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:18.404 [2024-11-29 15:59:29.780616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:18.404 [2024-11-29 15:59:29.780621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.404 [2024-11-29 15:59:29.791872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.405 [2024-11-29 15:59:29.791899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:18.405 [2024-11-29 15:59:29.791907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.232 ms 00:17:18.405 
[2024-11-29 15:59:29.791914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.405 [2024-11-29 15:59:29.801754] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:18.405 [2024-11-29 15:59:29.801781] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:18.405 [2024-11-29 15:59:29.801789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.405 [2024-11-29 15:59:29.801795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:18.405 [2024-11-29 15:59:29.801801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.783 ms 00:17:18.405 [2024-11-29 15:59:29.801807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.405 [2024-11-29 15:59:29.820179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.405 [2024-11-29 15:59:29.820209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:18.405 [2024-11-29 15:59:29.820217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.328 ms 00:17:18.405 [2024-11-29 15:59:29.820224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.405 [2024-11-29 15:59:29.829232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.405 [2024-11-29 15:59:29.829258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:18.405 [2024-11-29 15:59:29.829271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.958 ms 00:17:18.405 [2024-11-29 15:59:29.829276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.663 [2024-11-29 15:59:29.838170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.663 [2024-11-29 15:59:29.838196] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:18.663 [2024-11-29 15:59:29.838203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.855 ms 00:17:18.663 [2024-11-29 15:59:29.838209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.663 [2024-11-29 15:59:29.838473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.663 [2024-11-29 15:59:29.838489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:18.663 [2024-11-29 15:59:29.838497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:17:18.663 [2024-11-29 15:59:29.838504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.663 [2024-11-29 15:59:29.883310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.663 [2024-11-29 15:59:29.883346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:18.663 [2024-11-29 15:59:29.883357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.788 ms 00:17:18.663 [2024-11-29 15:59:29.883366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.663 [2024-11-29 15:59:29.891212] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:18.663 [2024-11-29 15:59:29.902624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.663 [2024-11-29 15:59:29.902652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:18.663 [2024-11-29 15:59:29.902662] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.190 ms 00:17:18.663 [2024-11-29 15:59:29.902668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.664 [2024-11-29 15:59:29.902721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.664 [2024-11-29 15:59:29.902728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:18.664 [2024-11-29 15:59:29.902736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:18.664 [2024-11-29 15:59:29.902742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.664 [2024-11-29 15:59:29.902779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.664 [2024-11-29 15:59:29.902785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:18.664 [2024-11-29 15:59:29.902791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:18.664 [2024-11-29 15:59:29.902796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.664 [2024-11-29 15:59:29.903713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.664 [2024-11-29 15:59:29.903748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:18.664 [2024-11-29 15:59:29.903755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.900 ms 00:17:18.664 [2024-11-29 15:59:29.903761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.664 [2024-11-29 15:59:29.903785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.664 [2024-11-29 15:59:29.903795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:18.664 [2024-11-29 15:59:29.903802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:18.664 [2024-11-29 15:59:29.903808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.664 [2024-11-29 15:59:29.903832] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:18.664 [2024-11-29 15:59:29.903839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.664 [2024-11-29 15:59:29.903845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:18.664 [2024-11-29 15:59:29.903851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:18.664 [2024-11-29 15:59:29.903857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.664 [2024-11-29 15:59:29.921593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.664 [2024-11-29 15:59:29.921622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:18.664 [2024-11-29 15:59:29.921631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.721 ms 00:17:18.664 [2024-11-29 15:59:29.921638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.664 [2024-11-29 15:59:29.921711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.664 [2024-11-29 15:59:29.921719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:18.664 [2024-11-29 15:59:29.921725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:18.664 [2024-11-29 15:59:29.921730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.664 [2024-11-29 15:59:29.922336] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:18.664 [2024-11-29 15:59:29.924726] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 210.717 ms, result 0 00:17:18.664 [2024-11-29 15:59:29.925578] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:18.664 [2024-11-29 15:59:29.940807] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:18.664  [2024-11-29T15:59:30.095Z] Copying: 4096/4096 [kB] (average 48 MBps)[2024-11-29 15:59:30.026892] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:18.664 [2024-11-29 15:59:30.033572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.664 [2024-11-29 15:59:30.033604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:18.664 [2024-11-29 15:59:30.033613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:18.664 [2024-11-29 15:59:30.033619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.664 [2024-11-29 15:59:30.033636] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:18.664 [2024-11-29 15:59:30.035725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.664 [2024-11-29 15:59:30.035748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:18.664 [2024-11-29 15:59:30.035757] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.079 ms 00:17:18.664 [2024-11-29 15:59:30.035764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.664 [2024-11-29 15:59:30.037065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.664 [2024-11-29 15:59:30.037090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:18.664 [2024-11-29 15:59:30.037098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.274 ms 00:17:18.664 [2024-11-29 15:59:30.037107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.664 [2024-11-29 15:59:30.040092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.664 [2024-11-29 15:59:30.040114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:18.664 [2024-11-29 15:59:30.040121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.972 ms 00:17:18.664 [2024-11-29 15:59:30.040126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.664 [2024-11-29 15:59:30.045549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.664 [2024-11-29 15:59:30.045571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:18.664 [2024-11-29 15:59:30.045578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.402 ms 00:17:18.664 [2024-11-29 15:59:30.045588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.664 [2024-11-29 15:59:30.063179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.664 [2024-11-29 15:59:30.063205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:18.664 [2024-11-29 15:59:30.063213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.559 ms 00:17:18.664 [2024-11-29 
15:59:30.063218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.664 [2024-11-29 15:59:30.074684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.664 [2024-11-29 15:59:30.074718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:18.664 [2024-11-29 15:59:30.074729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.422 ms 00:17:18.664 [2024-11-29 15:59:30.074736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.664 [2024-11-29 15:59:30.074853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.664 [2024-11-29 15:59:30.074861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:18.664 [2024-11-29 15:59:30.074868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:18.664 [2024-11-29 15:59:30.074874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.924 [2024-11-29 15:59:30.092881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.924 [2024-11-29 15:59:30.092907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:18.924 [2024-11-29 15:59:30.092915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.994 ms 00:17:18.924 [2024-11-29 15:59:30.092921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.924 [2024-11-29 15:59:30.110695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.924 [2024-11-29 15:59:30.110720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:18.924 [2024-11-29 15:59:30.110728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.738 ms 00:17:18.924 [2024-11-29 15:59:30.110733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.924 [2024-11-29 15:59:30.127751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.924 [2024-11-29 15:59:30.127778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:18.924 [2024-11-29 15:59:30.127786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.985 ms 00:17:18.924 [2024-11-29 15:59:30.127791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.924 [2024-11-29 15:59:30.145035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.924 [2024-11-29 15:59:30.145059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:18.924 [2024-11-29 15:59:30.145067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.190 ms 00:17:18.924 [2024-11-29 15:59:30.145073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.924 [2024-11-29 15:59:30.145105] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:18.924 [2024-11-29 15:59:30.145117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145143] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 
15:59:30.145287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:18.924 [2024-11-29 15:59:30.145368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:17:18.925 [2024-11-29 15:59:30.145424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:18.925 [2024-11-29 15:59:30.145717] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:18.925 [2024-11-29 15:59:30.145723] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 08798bdb-4e1e-4c21-9160-7bdd4bf640f7 00:17:18.925 [2024-11-29 15:59:30.145729] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:18.925 [2024-11-29 15:59:30.145735] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:18.925 [2024-11-29 
15:59:30.145740] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:18.925 [2024-11-29 15:59:30.145746] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:18.925 [2024-11-29 15:59:30.145754] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:18.925 [2024-11-29 15:59:30.145760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:18.925 [2024-11-29 15:59:30.145765] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:18.925 [2024-11-29 15:59:30.145770] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:18.925 [2024-11-29 15:59:30.145774] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:18.925 [2024-11-29 15:59:30.145779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.925 [2024-11-29 15:59:30.145785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:18.925 [2024-11-29 15:59:30.145791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms 00:17:18.925 [2024-11-29 15:59:30.145797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.925 [2024-11-29 15:59:30.155098] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.925 [2024-11-29 15:59:30.155121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:18.925 [2024-11-29 15:59:30.155132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.287 ms 00:17:18.925 [2024-11-29 15:59:30.155137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.925 [2024-11-29 15:59:30.155295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.925 [2024-11-29 15:59:30.155311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:18.925 [2024-11-29 15:59:30.155317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:17:18.925 [2024-11-29 15:59:30.155322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.925 [2024-11-29 15:59:30.184736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.925 [2024-11-29 15:59:30.184763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:18.925 [2024-11-29 15:59:30.184774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.925 [2024-11-29 15:59:30.184780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.925 [2024-11-29 15:59:30.184839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.925 [2024-11-29 15:59:30.184845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:18.925 [2024-11-29 15:59:30.184851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.925 [2024-11-29 15:59:30.184856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.925 [2024-11-29 15:59:30.184889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.925 [2024-11-29 15:59:30.184895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:18.925 [2024-11-29 15:59:30.184901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.925 [2024-11-29 15:59:30.184910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.925 [2024-11-29 15:59:30.184923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:17:18.925 [2024-11-29 15:59:30.184930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:18.925 [2024-11-29 15:59:30.184936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.926 [2024-11-29 15:59:30.184941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.926 [2024-11-29 15:59:30.241951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.926 [2024-11-29 15:59:30.241987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:18.926 [2024-11-29 15:59:30.241998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.926 [2024-11-29 15:59:30.242004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.926 [2024-11-29 15:59:30.264373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.926 [2024-11-29 15:59:30.264400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:18.926 [2024-11-29 15:59:30.264407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.926 [2024-11-29 15:59:30.264413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.926 [2024-11-29 15:59:30.264449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.926 [2024-11-29 15:59:30.264456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:18.926 [2024-11-29 15:59:30.264462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.926 [2024-11-29 15:59:30.264467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.926 [2024-11-29 15:59:30.264493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.926 [2024-11-29 15:59:30.264500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:18.926 [2024-11-29 15:59:30.264505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.926 [2024-11-29 15:59:30.264511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.926 [2024-11-29 15:59:30.264580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.926 [2024-11-29 15:59:30.264589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:18.926 [2024-11-29 15:59:30.264595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.926 [2024-11-29 15:59:30.264601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.926 [2024-11-29 15:59:30.264626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.926 [2024-11-29 15:59:30.264633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:18.926 [2024-11-29 15:59:30.264639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.926 [2024-11-29 15:59:30.264645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.926 [2024-11-29 15:59:30.264673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.926 [2024-11-29 15:59:30.264681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:18.926 [2024-11-29 15:59:30.264687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.926 [2024-11-29 15:59:30.264693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.926 
[2024-11-29 15:59:30.264731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.926 [2024-11-29 15:59:30.264741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:18.926 [2024-11-29 15:59:30.264746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.926 [2024-11-29 15:59:30.264752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.926 [2024-11-29 15:59:30.264857] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 231.271 ms, result 0 00:17:19.494 00:17:19.494 00:17:19.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.494 15:59:30 -- ftl/trim.sh@93 -- # svcpid=72474 00:17:19.494 15:59:30 -- ftl/trim.sh@94 -- # waitforlisten 72474 00:17:19.494 15:59:30 -- common/autotest_common.sh@829 -- # '[' -z 72474 ']' 00:17:19.494 15:59:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.494 15:59:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:19.494 15:59:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.494 15:59:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:19.494 15:59:30 -- common/autotest_common.sh@10 -- # set +x 00:17:19.495 15:59:30 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:19.754 [2024-11-29 15:59:30.987419] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:19.754 [2024-11-29 15:59:30.987536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72474 ] 00:17:19.754 [2024-11-29 15:59:31.134495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.012 [2024-11-29 15:59:31.276457] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:20.012 [2024-11-29 15:59:31.276610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.579 15:59:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.579 15:59:31 -- common/autotest_common.sh@862 -- # return 0 00:17:20.579 15:59:31 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:20.579 [2024-11-29 15:59:31.974477] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:20.579 [2024-11-29 15:59:31.974522] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:20.839 [2024-11-29 15:59:32.132410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.839 [2024-11-29 15:59:32.132448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:20.839 [2024-11-29 15:59:32.132459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:20.839 [2024-11-29 15:59:32.132465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.839 [2024-11-29 15:59:32.134483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.839 [2024-11-29 15:59:32.134512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:20.839 [2024-11-29 15:59:32.134521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
2.002 ms 00:17:20.839 [2024-11-29 15:59:32.134527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.839 [2024-11-29 15:59:32.134584] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:20.839 [2024-11-29 15:59:32.135133] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:20.839 [2024-11-29 15:59:32.135158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.839 [2024-11-29 15:59:32.135164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:20.839 [2024-11-29 15:59:32.135172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:17:20.839 [2024-11-29 15:59:32.135178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.839 [2024-11-29 15:59:32.136182] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:20.839 [2024-11-29 15:59:32.145791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.839 [2024-11-29 15:59:32.145824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:20.839 [2024-11-29 15:59:32.145833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.614 ms 00:17:20.839 [2024-11-29 15:59:32.145841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.839 [2024-11-29 15:59:32.145900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.839 [2024-11-29 15:59:32.145911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:20.839 [2024-11-29 15:59:32.145917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:20.839 [2024-11-29 15:59:32.145924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.839 [2024-11-29 15:59:32.150195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.839 [2024-11-29 15:59:32.150225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:20.839 [2024-11-29 15:59:32.150233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.234 ms 00:17:20.839 [2024-11-29 15:59:32.150240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.839 [2024-11-29 15:59:32.150309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.839 [2024-11-29 15:59:32.150319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:20.839 [2024-11-29 15:59:32.150325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:20.839 [2024-11-29 15:59:32.150332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.839 [2024-11-29 15:59:32.150351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.839 [2024-11-29 15:59:32.150359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:20.839 [2024-11-29 15:59:32.150364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:20.839 [2024-11-29 15:59:32.150372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.839 [2024-11-29 15:59:32.150393] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:20.839 [2024-11-29 15:59:32.153140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.839 [2024-11-29 15:59:32.153165] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:20.839 [2024-11-29 15:59:32.153174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.752 ms 00:17:20.839 [2024-11-29 15:59:32.153180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.839 [2024-11-29 15:59:32.153211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.839 [2024-11-29 15:59:32.153219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:20.839 [2024-11-29 15:59:32.153226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:20.839 [2024-11-29 15:59:32.153233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.839 [2024-11-29 15:59:32.153249] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:20.839 [2024-11-29 15:59:32.153263] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:20.839 [2024-11-29 15:59:32.153290] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:20.839 [2024-11-29 15:59:32.153301] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:20.839 [2024-11-29 15:59:32.153358] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:20.839 [2024-11-29 15:59:32.153366] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:20.839 [2024-11-29 15:59:32.153379] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:20.839 [2024-11-29 15:59:32.153386] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:20.839 [2024-11-29 15:59:32.153394] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:20.839 [2024-11-29 15:59:32.153400] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:20.839 [2024-11-29 15:59:32.153407] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:20.839 [2024-11-29 15:59:32.153413] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:20.839 [2024-11-29 15:59:32.153421] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:20.839 [2024-11-29 15:59:32.153427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.839 [2024-11-29 15:59:32.153434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:20.839 [2024-11-29 15:59:32.153440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:17:20.839 [2024-11-29 15:59:32.153447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.839 [2024-11-29 15:59:32.153497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.839 [2024-11-29 15:59:32.153503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:20.839 [2024-11-29 15:59:32.153509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:20.839 [2024-11-29 15:59:32.153515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.839 [2024-11-29 15:59:32.153572] ftl_layout.c: 759:ftl_layout_dump: 
*NOTICE*: [FTL][ftl0] NV cache layout: 00:17:20.839 [2024-11-29 15:59:32.153585] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:20.839 [2024-11-29 15:59:32.153592] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:20.839 [2024-11-29 15:59:32.153600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:20.839 [2024-11-29 15:59:32.153606] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:20.839 [2024-11-29 15:59:32.153612] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:20.839 [2024-11-29 15:59:32.153618] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:20.839 [2024-11-29 15:59:32.153626] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:20.839 [2024-11-29 15:59:32.153632] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:20.839 [2024-11-29 15:59:32.153638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:20.839 [2024-11-29 15:59:32.153643] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:20.839 [2024-11-29 15:59:32.153650] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:20.839 [2024-11-29 15:59:32.153656] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:20.839 [2024-11-29 15:59:32.153662] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:20.839 [2024-11-29 15:59:32.153668] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:20.839 [2024-11-29 15:59:32.153674] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:20.839 [2024-11-29 15:59:32.153679] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:20.839 [2024-11-29 15:59:32.153693] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:20.839 [2024-11-29 15:59:32.153697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:20.839 [2024-11-29 15:59:32.153704] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:20.839 [2024-11-29 15:59:32.153708] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:20.839 [2024-11-29 15:59:32.153715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:20.839 [2024-11-29 15:59:32.153721] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:20.839 [2024-11-29 15:59:32.153728] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:20.839 [2024-11-29 15:59:32.153733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:20.839 [2024-11-29 15:59:32.153743] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:20.839 [2024-11-29 15:59:32.153748] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:20.839 [2024-11-29 15:59:32.153754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:20.839 [2024-11-29 15:59:32.153760] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:20.839 [2024-11-29 15:59:32.153766] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:20.839 [2024-11-29 15:59:32.153770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:20.839 [2024-11-29 15:59:32.153777] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:20.839 [2024-11-29 15:59:32.153782] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:20.839 [2024-11-29 15:59:32.153788] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:20.839 [2024-11-29 15:59:32.153792] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:20.840 [2024-11-29 15:59:32.153798] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:20.840 [2024-11-29 15:59:32.153803] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:20.840 [2024-11-29 15:59:32.153810] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:20.840 [2024-11-29 15:59:32.153815] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:20.840 [2024-11-29 15:59:32.153822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:20.840 [2024-11-29 15:59:32.153827] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:20.840 [2024-11-29 15:59:32.153835] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:20.840 [2024-11-29 15:59:32.153840] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:20.840 [2024-11-29 15:59:32.153847] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:20.840 [2024-11-29 15:59:32.153852] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:20.840 [2024-11-29 15:59:32.153858] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:20.840 [2024-11-29 15:59:32.153864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:20.840 [2024-11-29 15:59:32.153871] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:20.840 [2024-11-29 15:59:32.153875] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:20.840 [2024-11-29 15:59:32.153881] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:20.840 [2024-11-29 15:59:32.153887] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:20.840 [2024-11-29 15:59:32.153895] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:20.840 [2024-11-29 15:59:32.153901] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:20.840 [2024-11-29 15:59:32.153908] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:20.840 [2024-11-29 15:59:32.153914] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:20.840 [2024-11-29 15:59:32.153923] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:20.840 [2024-11-29 15:59:32.153928] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:20.840 [2024-11-29 15:59:32.153935] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:20.840 [2024-11-29 15:59:32.153941] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:20.840 [2024-11-29 
15:59:32.153947] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:20.840 [2024-11-29 15:59:32.153952] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:20.840 [2024-11-29 15:59:32.153959] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:20.840 [2024-11-29 15:59:32.153964] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:20.840 [2024-11-29 15:59:32.153980] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:20.840 [2024-11-29 15:59:32.153987] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:20.840 [2024-11-29 15:59:32.153993] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:20.840 [2024-11-29 15:59:32.153999] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:20.840 [2024-11-29 15:59:32.154006] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:20.840 [2024-11-29 15:59:32.154012] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:20.840 [2024-11-29 15:59:32.154019] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:20.840 [2024-11-29 15:59:32.154024] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:20.840 [2024-11-29 15:59:32.154032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.840 [2024-11-29 15:59:32.154038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:20.840 [2024-11-29 15:59:32.154044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 00:17:20.840 [2024-11-29 15:59:32.154050] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.840 [2024-11-29 15:59:32.165880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.840 [2024-11-29 15:59:32.165907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:20.840 [2024-11-29 15:59:32.165918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.792 ms 00:17:20.840 [2024-11-29 15:59:32.165926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.840 [2024-11-29 15:59:32.166023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.840 [2024-11-29 15:59:32.166031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:20.840 [2024-11-29 15:59:32.166038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:17:20.840 [2024-11-29 15:59:32.166044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.840 [2024-11-29 15:59:32.189968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.840 [2024-11-29 
15:59:32.190002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:20.840 [2024-11-29 15:59:32.190012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.908 ms 00:17:20.840 [2024-11-29 15:59:32.190019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.840 [2024-11-29 15:59:32.190063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.840 [2024-11-29 15:59:32.190072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:20.840 [2024-11-29 15:59:32.190080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:20.840 [2024-11-29 15:59:32.190087] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.840 [2024-11-29 15:59:32.190361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.840 [2024-11-29 15:59:32.190372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:20.840 [2024-11-29 15:59:32.190382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:17:20.840 [2024-11-29 15:59:32.190388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.840 [2024-11-29 15:59:32.190478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.840 [2024-11-29 15:59:32.190512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:20.840 [2024-11-29 15:59:32.190522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:20.840 [2024-11-29 15:59:32.190527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.840 [2024-11-29 15:59:32.202312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.840 [2024-11-29 15:59:32.202336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:20.840 [2024-11-29 15:59:32.202346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.768 ms 00:17:20.840 [2024-11-29 15:59:32.202352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.840 [2024-11-29 15:59:32.211984] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:20.840 [2024-11-29 15:59:32.212012] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:20.840 [2024-11-29 15:59:32.212022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.840 [2024-11-29 15:59:32.212028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:20.840 [2024-11-29 15:59:32.212035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.593 ms 00:17:20.840 [2024-11-29 15:59:32.212041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.840 [2024-11-29 15:59:32.230358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.840 [2024-11-29 15:59:32.230386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:20.840 [2024-11-29 15:59:32.230397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.272 ms 00:17:20.840 [2024-11-29 15:59:32.230403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.840 [2024-11-29 15:59:32.239163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.840 [2024-11-29 15:59:32.239191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Restore band info metadata 00:17:20.840 [2024-11-29 15:59:32.239200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.708 ms 00:17:20.840 [2024-11-29 15:59:32.239205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.840 [2024-11-29 15:59:32.247960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.840 [2024-11-29 15:59:32.247991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:20.840 [2024-11-29 15:59:32.248001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.713 ms 00:17:20.840 [2024-11-29 15:59:32.248007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.840 [2024-11-29 15:59:32.248277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.840 [2024-11-29 15:59:32.248291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:20.840 [2024-11-29 15:59:32.248301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:17:20.840 [2024-11-29 15:59:32.248307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.099 [2024-11-29 15:59:32.293431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.099 [2024-11-29 15:59:32.293463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:21.099 [2024-11-29 15:59:32.293476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.104 ms 00:17:21.099 [2024-11-29 15:59:32.293482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.099 [2024-11-29 15:59:32.301259] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:21.099 [2024-11-29 15:59:32.312616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.099 [2024-11-29 15:59:32.312647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:21.099 [2024-11-29 15:59:32.312656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.075 ms 00:17:21.099 [2024-11-29 15:59:32.312664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.099 [2024-11-29 15:59:32.312715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.099 [2024-11-29 15:59:32.312725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:21.099 [2024-11-29 15:59:32.312733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:21.099 [2024-11-29 15:59:32.312740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.099 [2024-11-29 15:59:32.312777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.099 [2024-11-29 15:59:32.312785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:21.099 [2024-11-29 15:59:32.312791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:21.099 [2024-11-29 15:59:32.312798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.099 [2024-11-29 15:59:32.313722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.099 [2024-11-29 15:59:32.313761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:21.099 [2024-11-29 15:59:32.313768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.908 ms 00:17:21.099 [2024-11-29 15:59:32.313777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
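[editor's note] Every management action in the startup sequence above is bracketed by trace_step notices from mngt/ftl_mngt.c, each reporting the step name, its measured duration in milliseconds, and a status code (0 on success; a non-zero status triggers the Rollback records seen later in this log). Below is a minimal, self-contained sketch of that logging pattern; the step table, function names, and structure are illustrative assumptions, not the actual SPDK ftl_mngt implementation.

    /* Hedged sketch of the per-step trace logging seen above: run a list
     * of named steps, time each one, and emit name/duration/status
     * notices in the same shape as the log. Not the SPDK code itself. */
    #include <stdio.h>
    #include <time.h>

    typedef int (*step_fn)(void);

    static int step_ok(void) { return 0; }   /* stand-in for a real action */

    struct mngt_step {
        const char *name;
        step_fn fn;
    };

    static double elapsed_ms(struct timespec a, struct timespec b)
    {
        return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
    }

    int main(void)
    {
        struct mngt_step steps[] = {
            { "Initialize L2P", step_ok },            /* names taken from the log */
            { "Restore L2P", step_ok },
            { "Finalize band initialization", step_ok },
        };

        for (size_t i = 0; i < sizeof(steps) / sizeof(steps[0]); i++) {
            struct timespec start, end;
            clock_gettime(CLOCK_MONOTONIC, &start);
            int status = steps[i].fn();
            clock_gettime(CLOCK_MONOTONIC, &end);

            printf("*NOTICE*: [FTL][ftl0] Action\n");
            printf("*NOTICE*: [FTL][ftl0] name: %s\n", steps[i].name);
            printf("*NOTICE*: [FTL][ftl0] duration: %.3f ms\n",
                   elapsed_ms(start, end));
            printf("*NOTICE*: [FTL][ftl0] status: %d\n", status);
            if (status) return status;   /* a failed step aborts the sequence */
        }
        return 0;
    }

The per-step durations in the real log sum up to the "FTL startup" total of 200.047 ms reported by finish_msg a few records below.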
00:17:21.099 [2024-11-29 15:59:32.313802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.099 [2024-11-29 15:59:32.313811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:21.099 [2024-11-29 15:59:32.313816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:21.099 [2024-11-29 15:59:32.313823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.099 [2024-11-29 15:59:32.313850] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:21.099 [2024-11-29 15:59:32.313860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.099 [2024-11-29 15:59:32.313867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:21.099 [2024-11-29 15:59:32.313873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:21.099 [2024-11-29 15:59:32.313879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.099 [2024-11-29 15:59:32.331881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.099 [2024-11-29 15:59:32.331903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:21.099 [2024-11-29 15:59:32.331913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.981 ms 00:17:21.099 [2024-11-29 15:59:32.331920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.099 [2024-11-29 15:59:32.331994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.099 [2024-11-29 15:59:32.332003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:21.099 [2024-11-29 15:59:32.332010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:21.099 [2024-11-29 15:59:32.332018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.099 [2024-11-29 15:59:32.332669] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:21.099 [2024-11-29 15:59:32.335084] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 200.047 ms, result 0 00:17:21.099 [2024-11-29 15:59:32.336023] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:21.099 Some configs were skipped because the RPC state that can call them passed over. 
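[editor's note] The two bdev_ftl_unmap RPC calls that follow trim 1024 blocks at the very start and the very end of the device's logical space. With the 23592960 L2P entries reported during ftl_layout_setup, the last valid start LBA for a 1024-block trim is 23592960 - 1024 = 23591936, which is exactly the --lba passed in the second call. As a cross-check, 23592960 entries at the reported 4-byte L2P address size is 94371840 bytes, matching the 90.00 MiB shown for the l2p region in the layout dump. A small sketch verifying the range arithmetic, with the constants lifted from the log:

    /* Sanity-check for the two unmap ranges issued by the test: one at
     * the start of the LBA space and one flush against its end. The
     * entry count comes from the "L2P entries" notice in the startup log. */
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const uint64_t l2p_entries = 23592960;  /* from the ftl_layout_setup dump */
        const uint64_t num_blocks  = 1024;      /* --num_blocks in both RPC calls */

        uint64_t first_lba = 0;
        uint64_t last_lba  = l2p_entries - num_blocks;

        /* Matches --lba 23591936 in the second bdev_ftl_unmap invocation. */
        assert(last_lba == 23591936);

        printf("unmap #1: lba=%llu blocks=%llu\n",
               (unsigned long long)first_lba, (unsigned long long)num_blocks);
        printf("unmap #2: lba=%llu blocks=%llu\n",
               (unsigned long long)last_lba, (unsigned long long)num_blocks);
        return 0;
    }

Each RPC shows up in the log as its own short management process ("FTL unmap", ~19 ms) and returns true to the shell on success.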
00:17:21.099 15:59:32 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:21.358 [2024-11-29 15:59:32.553768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.358 [2024-11-29 15:59:32.553806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:17:21.358 [2024-11-29 15:59:32.553816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.867 ms 00:17:21.358 [2024-11-29 15:59:32.553825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.358 [2024-11-29 15:59:32.553852] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 18.952 ms, result 0 00:17:21.358 true 00:17:21.358 15:59:32 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:21.358 [2024-11-29 15:59:32.764540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.358 [2024-11-29 15:59:32.764571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:17:21.358 [2024-11-29 15:59:32.764581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.534 ms 00:17:21.358 [2024-11-29 15:59:32.764587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.358 [2024-11-29 15:59:32.764615] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 18.608 ms, result 0 00:17:21.358 true 00:17:21.358 15:59:32 -- ftl/trim.sh@102 -- # killprocess 72474 00:17:21.358 15:59:32 -- common/autotest_common.sh@936 -- # '[' -z 72474 ']' 00:17:21.358 15:59:32 -- common/autotest_common.sh@940 -- # kill -0 72474 00:17:21.358 15:59:32 -- common/autotest_common.sh@941 -- # uname 00:17:21.358 15:59:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:21.617 15:59:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72474 00:17:21.617 killing process with pid 72474 00:17:21.617 15:59:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:21.617 15:59:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:21.617 15:59:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72474' 00:17:21.617 15:59:32 -- common/autotest_common.sh@955 -- # kill 72474 00:17:21.617 15:59:32 -- common/autotest_common.sh@960 -- # wait 72474 00:17:22.185 [2024-11-29 15:59:33.318257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.185 [2024-11-29 15:59:33.318307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:22.185 [2024-11-29 15:59:33.318318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:22.185 [2024-11-29 15:59:33.318327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.185 [2024-11-29 15:59:33.318345] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:22.185 [2024-11-29 15:59:33.320285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.185 [2024-11-29 15:59:33.320311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:22.185 [2024-11-29 15:59:33.320322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.926 ms 00:17:22.185 [2024-11-29 15:59:33.320328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.185 [2024-11-29 
15:59:33.320569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.185 [2024-11-29 15:59:33.320584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:22.185 [2024-11-29 15:59:33.320592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:17:22.185 [2024-11-29 15:59:33.320597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.185 [2024-11-29 15:59:33.323733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.185 [2024-11-29 15:59:33.323760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:22.185 [2024-11-29 15:59:33.323768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.119 ms 00:17:22.185 [2024-11-29 15:59:33.323774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.185 [2024-11-29 15:59:33.329167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.185 [2024-11-29 15:59:33.329211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:22.185 [2024-11-29 15:59:33.329220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.367 ms 00:17:22.185 [2024-11-29 15:59:33.329227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.185 [2024-11-29 15:59:33.336631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.185 [2024-11-29 15:59:33.336659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:22.185 [2024-11-29 15:59:33.336669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.347 ms 00:17:22.185 [2024-11-29 15:59:33.336675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.185 [2024-11-29 15:59:33.343176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.185 [2024-11-29 15:59:33.343204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:22.185 [2024-11-29 15:59:33.343214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.471 ms 00:17:22.185 [2024-11-29 15:59:33.343221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.185 [2024-11-29 15:59:33.343328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.185 [2024-11-29 15:59:33.343337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:22.185 [2024-11-29 15:59:33.343345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:22.185 [2024-11-29 15:59:33.343351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.185 [2024-11-29 15:59:33.351584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.185 [2024-11-29 15:59:33.351608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:22.185 [2024-11-29 15:59:33.351616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.216 ms 00:17:22.185 [2024-11-29 15:59:33.351622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.185 [2024-11-29 15:59:33.359140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.185 [2024-11-29 15:59:33.359166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:22.185 [2024-11-29 15:59:33.359179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.489 ms 00:17:22.185 [2024-11-29 15:59:33.359184] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:22.185 [2024-11-29 15:59:33.366284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.185 [2024-11-29 15:59:33.366307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:22.185 [2024-11-29 15:59:33.366315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.070 ms 00:17:22.185 [2024-11-29 15:59:33.366320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.185 [2024-11-29 15:59:33.373306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.185 [2024-11-29 15:59:33.373332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:22.185 [2024-11-29 15:59:33.373340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.935 ms 00:17:22.185 [2024-11-29 15:59:33.373346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.185 [2024-11-29 15:59:33.373373] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:22.185 [2024-11-29 15:59:33.373385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:22.185 [2024-11-29 15:59:33.373394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:22.185 [2024-11-29 15:59:33.373400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:22.185 [2024-11-29 15:59:33.373407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:22.185 [2024-11-29 15:59:33.373413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:22.185 [2024-11-29 15:59:33.373421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:22.185 [2024-11-29 15:59:33.373427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:22.185 [2024-11-29 15:59:33.373434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:22.185 [2024-11-29 15:59:33.373439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:22.185 [2024-11-29 15:59:33.373446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:22.185 [2024-11-29 15:59:33.373452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:22.185 [2024-11-29 15:59:33.373458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:22.185 [2024-11-29 15:59:33.373464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:22.185 [2024-11-29 15:59:33.373472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:22.185 [2024-11-29 15:59:33.373478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:22.185 [2024-11-29 15:59:33.373484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:22.185 [2024-11-29 15:59:33.373490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:22.185 [2024-11-29 15:59:33.373496] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:22.185 [2024-11-29 15:59:33.373502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:22.185 [2024-11-29 15:59:33.373509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:22.185 [2024-11-29 15:59:33.373515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:22.185 [2024-11-29 15:59:33.373523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:22.185 [2024-11-29 15:59:33.373529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:22.185 [2024-11-29 15:59:33.373535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:22.185 [2024-11-29 15:59:33.373541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:22.185 [2024-11-29 15:59:33.373547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:22.185 [2024-11-29 15:59:33.373552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373654] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 
15:59:33.373825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 
00:17:22.186 [2024-11-29 15:59:33.373986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.373995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.374000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.374007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.374017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.374024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.374029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.374036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:22.186 [2024-11-29 15:59:33.374048] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:22.186 [2024-11-29 15:59:33.374057] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 08798bdb-4e1e-4c21-9160-7bdd4bf640f7 00:17:22.186 [2024-11-29 15:59:33.374063] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:22.186 [2024-11-29 15:59:33.374070] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:22.186 [2024-11-29 15:59:33.374075] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:22.186 [2024-11-29 15:59:33.374082] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:22.186 [2024-11-29 15:59:33.374087] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:22.186 [2024-11-29 15:59:33.374094] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:22.186 [2024-11-29 15:59:33.374099] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:22.186 [2024-11-29 15:59:33.374106] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:22.186 [2024-11-29 15:59:33.374111] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:22.186 [2024-11-29 15:59:33.374117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.186 [2024-11-29 15:59:33.374123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:22.186 [2024-11-29 15:59:33.374132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:17:22.186 [2024-11-29 15:59:33.374138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.186 [2024-11-29 15:59:33.383736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.186 [2024-11-29 15:59:33.383760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:22.186 [2024-11-29 15:59:33.383770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.581 ms 00:17:22.187 [2024-11-29 15:59:33.383776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.187 [2024-11-29 15:59:33.383942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.187 [2024-11-29 15:59:33.383957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:22.187 
[2024-11-29 15:59:33.383965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:17:22.187 [2024-11-29 15:59:33.383983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.187 [2024-11-29 15:59:33.418520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.187 [2024-11-29 15:59:33.418545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:22.187 [2024-11-29 15:59:33.418554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.187 [2024-11-29 15:59:33.418560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.187 [2024-11-29 15:59:33.418619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.187 [2024-11-29 15:59:33.418628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:22.187 [2024-11-29 15:59:33.418635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.187 [2024-11-29 15:59:33.418641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.187 [2024-11-29 15:59:33.418674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.187 [2024-11-29 15:59:33.418682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:22.187 [2024-11-29 15:59:33.418691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.187 [2024-11-29 15:59:33.418696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.187 [2024-11-29 15:59:33.418712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.187 [2024-11-29 15:59:33.418717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:22.187 [2024-11-29 15:59:33.418726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.187 [2024-11-29 15:59:33.418731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.187 [2024-11-29 15:59:33.479036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.187 [2024-11-29 15:59:33.479068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:22.187 [2024-11-29 15:59:33.479078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.187 [2024-11-29 15:59:33.479085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.187 [2024-11-29 15:59:33.501922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.187 [2024-11-29 15:59:33.501951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:22.187 [2024-11-29 15:59:33.501962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.187 [2024-11-29 15:59:33.501968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.187 [2024-11-29 15:59:33.502023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.187 [2024-11-29 15:59:33.502031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:22.187 [2024-11-29 15:59:33.502040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.187 [2024-11-29 15:59:33.502045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.187 [2024-11-29 15:59:33.502071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.187 [2024-11-29 15:59:33.502077] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:22.187 [2024-11-29 15:59:33.502084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.187 [2024-11-29 15:59:33.502091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.187 [2024-11-29 15:59:33.502163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.187 [2024-11-29 15:59:33.502172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:22.187 [2024-11-29 15:59:33.502179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.187 [2024-11-29 15:59:33.502184] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.187 [2024-11-29 15:59:33.502211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.187 [2024-11-29 15:59:33.502217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:22.187 [2024-11-29 15:59:33.502224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.187 [2024-11-29 15:59:33.502229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.187 [2024-11-29 15:59:33.502260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.187 [2024-11-29 15:59:33.502266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:22.187 [2024-11-29 15:59:33.502275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.187 [2024-11-29 15:59:33.502280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.187 [2024-11-29 15:59:33.502317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.187 [2024-11-29 15:59:33.502324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:22.187 [2024-11-29 15:59:33.502331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.187 [2024-11-29 15:59:33.502338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.187 [2024-11-29 15:59:33.502444] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 184.171 ms, result 0 00:17:22.755 15:59:34 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:23.014 [2024-11-29 15:59:34.190758] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
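[editor's note] The spdk_dd invocation at trim.sh@105 above reads 65536 blocks from the ftl0 bdev into test/ftl/data, exercising the full read path over the device that was just shut down cleanly and is being reopened below. Conceptually it is a dd-style block-copy loop; the sketch that follows is a loose, file-based analogy only, since spdk_dd drives SPDK bdevs through the --json config rather than POSIX files, and the 4 KiB block size used here is an assumption (the log does not state ftl0's block size).

    /* Rough, file-based analogy of the spdk_dd step: copy a fixed count
     * of fixed-size blocks from an input to an output. spdk_dd itself
     * talks to SPDK bdevs (here --ib=ftl0), not POSIX files; paths and
     * the block size below are placeholders, not values from the log. */
    #include <stdio.h>

    #define BLOCK_SIZE  4096    /* assumed; not stated in this log */
    #define BLOCK_COUNT 65536   /* --count=65536 in the trim.sh invocation */

    int main(void)
    {
        FILE *in  = fopen("input.bin", "rb");  /* stand-in for --ib=ftl0 */
        FILE *out = fopen("data", "wb");       /* stand-in for --of=.../ftl/data */
        if (!in || !out) { perror("fopen"); return 1; }

        char buf[BLOCK_SIZE];
        for (long i = 0; i < BLOCK_COUNT; i++) {
            if (fread(buf, 1, BLOCK_SIZE, in) != BLOCK_SIZE)
                break;                          /* short input: stop early */
            if (fwrite(buf, 1, BLOCK_SIZE, out) != BLOCK_SIZE) {
                perror("fwrite");
                return 1;
            }
        }
        fclose(in);
        fclose(out);
        return 0;
    }

Under the assumed 4 KiB block size this would be a 256 MiB read; the EAL parameter dump and the FTL startup records that follow show spdk_dd bringing up its own single-core SPDK application (pid 72522) before the transfer begins.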
00:17:23.014 [2024-11-29 15:59:34.190873] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72522 ] 00:17:23.014 [2024-11-29 15:59:34.339604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.273 [2024-11-29 15:59:34.478091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.273 [2024-11-29 15:59:34.682504] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:23.273 [2024-11-29 15:59:34.682551] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:23.534 [2024-11-29 15:59:34.823908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.534 [2024-11-29 15:59:34.823947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:23.534 [2024-11-29 15:59:34.823957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:23.534 [2024-11-29 15:59:34.823963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.534 [2024-11-29 15:59:34.825989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.534 [2024-11-29 15:59:34.826018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:23.534 [2024-11-29 15:59:34.826025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.005 ms 00:17:23.534 [2024-11-29 15:59:34.826031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.534 [2024-11-29 15:59:34.826087] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:23.534 [2024-11-29 15:59:34.826629] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:23.534 [2024-11-29 15:59:34.826650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.535 [2024-11-29 15:59:34.826656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:23.535 [2024-11-29 15:59:34.826663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:17:23.535 [2024-11-29 15:59:34.826668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.535 [2024-11-29 15:59:34.827611] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:23.535 [2024-11-29 15:59:34.837056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.535 [2024-11-29 15:59:34.837085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:23.535 [2024-11-29 15:59:34.837093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.446 ms 00:17:23.535 [2024-11-29 15:59:34.837099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.535 [2024-11-29 15:59:34.837163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.535 [2024-11-29 15:59:34.837172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:23.535 [2024-11-29 15:59:34.837178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:23.535 [2024-11-29 15:59:34.837183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.535 [2024-11-29 15:59:34.841461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.535 [2024-11-29 
15:59:34.841487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:23.535 [2024-11-29 15:59:34.841493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.248 ms 00:17:23.535 [2024-11-29 15:59:34.841502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.535 [2024-11-29 15:59:34.841581] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.535 [2024-11-29 15:59:34.841588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:23.535 [2024-11-29 15:59:34.841595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:23.535 [2024-11-29 15:59:34.841600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.535 [2024-11-29 15:59:34.841616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.535 [2024-11-29 15:59:34.841622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:23.535 [2024-11-29 15:59:34.841628] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:23.535 [2024-11-29 15:59:34.841633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.535 [2024-11-29 15:59:34.841657] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:23.535 [2024-11-29 15:59:34.844389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.535 [2024-11-29 15:59:34.844413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:23.535 [2024-11-29 15:59:34.844420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.742 ms 00:17:23.535 [2024-11-29 15:59:34.844428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.535 [2024-11-29 15:59:34.844457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.535 [2024-11-29 15:59:34.844465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:23.535 [2024-11-29 15:59:34.844471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:23.535 [2024-11-29 15:59:34.844476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.535 [2024-11-29 15:59:34.844489] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:23.535 [2024-11-29 15:59:34.844503] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:23.535 [2024-11-29 15:59:34.844528] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:23.535 [2024-11-29 15:59:34.844541] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:23.535 [2024-11-29 15:59:34.844597] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:23.535 [2024-11-29 15:59:34.844606] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:23.535 [2024-11-29 15:59:34.844614] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:23.535 [2024-11-29 15:59:34.844621] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:23.535 [2024-11-29 15:59:34.844628] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:23.535 [2024-11-29 15:59:34.844633] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:23.535 [2024-11-29 15:59:34.844640] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:23.535 [2024-11-29 15:59:34.844646] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:23.535 [2024-11-29 15:59:34.844654] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:23.535 [2024-11-29 15:59:34.844661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.535 [2024-11-29 15:59:34.844666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:23.535 [2024-11-29 15:59:34.844672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:17:23.535 [2024-11-29 15:59:34.844678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.535 [2024-11-29 15:59:34.844728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.535 [2024-11-29 15:59:34.844735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:23.535 [2024-11-29 15:59:34.844740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:23.535 [2024-11-29 15:59:34.844746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.535 [2024-11-29 15:59:34.844801] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:23.535 [2024-11-29 15:59:34.844809] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:23.535 [2024-11-29 15:59:34.844816] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:23.535 [2024-11-29 15:59:34.844822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.535 [2024-11-29 15:59:34.844827] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:23.535 [2024-11-29 15:59:34.844832] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:23.535 [2024-11-29 15:59:34.844837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:23.535 [2024-11-29 15:59:34.844843] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:23.535 [2024-11-29 15:59:34.844849] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:23.535 [2024-11-29 15:59:34.844854] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:23.535 [2024-11-29 15:59:34.844859] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:23.535 [2024-11-29 15:59:34.844864] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:23.535 [2024-11-29 15:59:34.844870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:23.535 [2024-11-29 15:59:34.844876] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:23.535 [2024-11-29 15:59:34.844885] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:23.535 [2024-11-29 15:59:34.844890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.535 [2024-11-29 15:59:34.844896] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:23.535 [2024-11-29 15:59:34.844901] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:23.535 [2024-11-29 15:59:34.844906] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:23.535 [2024-11-29 15:59:34.844911] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:23.535 [2024-11-29 15:59:34.844915] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:23.535 [2024-11-29 15:59:34.844920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:23.535 [2024-11-29 15:59:34.844925] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:23.535 [2024-11-29 15:59:34.844930] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:23.535 [2024-11-29 15:59:34.844935] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:23.535 [2024-11-29 15:59:34.844940] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:23.535 [2024-11-29 15:59:34.844944] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:23.535 [2024-11-29 15:59:34.844949] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:23.535 [2024-11-29 15:59:34.844955] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:23.535 [2024-11-29 15:59:34.844959] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:23.535 [2024-11-29 15:59:34.844964] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:23.535 [2024-11-29 15:59:34.844980] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:23.535 [2024-11-29 15:59:34.844985] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:23.535 [2024-11-29 15:59:34.844990] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:23.535 [2024-11-29 15:59:34.844995] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:23.535 [2024-11-29 15:59:34.845000] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:23.535 [2024-11-29 15:59:34.845005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:23.535 [2024-11-29 15:59:34.845010] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:23.535 [2024-11-29 15:59:34.845015] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:23.535 [2024-11-29 15:59:34.845020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:23.535 [2024-11-29 15:59:34.845024] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:23.535 [2024-11-29 15:59:34.845032] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:23.535 [2024-11-29 15:59:34.845037] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:23.535 [2024-11-29 15:59:34.845045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.535 [2024-11-29 15:59:34.845051] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:23.535 [2024-11-29 15:59:34.845057] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:23.535 [2024-11-29 15:59:34.845062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:23.535 [2024-11-29 15:59:34.845068] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:23.535 [2024-11-29 15:59:34.845072] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:23.535 [2024-11-29 15:59:34.845077] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:23.535 [2024-11-29 15:59:34.845083] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:23.536 [2024-11-29 15:59:34.845090] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:23.536 [2024-11-29 15:59:34.845096] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:23.536 [2024-11-29 15:59:34.845102] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:23.536 [2024-11-29 15:59:34.845108] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:23.536 [2024-11-29 15:59:34.845113] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:23.536 [2024-11-29 15:59:34.845118] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:23.536 [2024-11-29 15:59:34.845123] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:23.536 [2024-11-29 15:59:34.845128] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:23.536 [2024-11-29 15:59:34.845133] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:23.536 [2024-11-29 15:59:34.845138] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:23.536 [2024-11-29 15:59:34.845143] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:23.536 [2024-11-29 15:59:34.845150] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:23.536 [2024-11-29 15:59:34.845156] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:23.536 [2024-11-29 15:59:34.845161] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:23.536 [2024-11-29 15:59:34.845166] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:23.536 [2024-11-29 15:59:34.845175] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:23.536 [2024-11-29 15:59:34.845181] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:23.536 [2024-11-29 15:59:34.845187] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:23.536 [2024-11-29 15:59:34.845192] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:23.536 [2024-11-29 15:59:34.845197] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:23.536 [2024-11-29 15:59:34.845203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.536 [2024-11-29 15:59:34.845208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:23.536 [2024-11-29 15:59:34.845214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:17:23.536 [2024-11-29 15:59:34.845220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.536 [2024-11-29 15:59:34.857029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.536 [2024-11-29 15:59:34.857053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:23.536 [2024-11-29 15:59:34.857062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.777 ms 00:17:23.536 [2024-11-29 15:59:34.857067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.536 [2024-11-29 15:59:34.857155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.536 [2024-11-29 15:59:34.857162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:23.536 [2024-11-29 15:59:34.857168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:23.536 [2024-11-29 15:59:34.857174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.536 [2024-11-29 15:59:34.892190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.536 [2024-11-29 15:59:34.892223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:23.536 [2024-11-29 15:59:34.892233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.000 ms 00:17:23.536 [2024-11-29 15:59:34.892240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.536 [2024-11-29 15:59:34.892296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.536 [2024-11-29 15:59:34.892305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:23.536 [2024-11-29 15:59:34.892314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:23.536 [2024-11-29 15:59:34.892320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.536 [2024-11-29 15:59:34.892598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.536 [2024-11-29 15:59:34.892622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:23.536 [2024-11-29 15:59:34.892629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:17:23.536 [2024-11-29 15:59:34.892635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.536 [2024-11-29 15:59:34.892729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.536 [2024-11-29 15:59:34.892737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:23.536 [2024-11-29 15:59:34.892743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:23.536 [2024-11-29 15:59:34.892749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.536 [2024-11-29 15:59:34.903928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.536 [2024-11-29 15:59:34.903954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:23.536 [2024-11-29 15:59:34.903962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.161 ms 00:17:23.536 
[2024-11-29 15:59:34.903987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.536 [2024-11-29 15:59:34.913659] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:23.536 [2024-11-29 15:59:34.913718] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:23.536 [2024-11-29 15:59:34.913727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.536 [2024-11-29 15:59:34.913733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:23.536 [2024-11-29 15:59:34.913740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.665 ms 00:17:23.536 [2024-11-29 15:59:34.913746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.536 [2024-11-29 15:59:34.932171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.536 [2024-11-29 15:59:34.932202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:23.536 [2024-11-29 15:59:34.932211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.329 ms 00:17:23.536 [2024-11-29 15:59:34.932218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.536 [2024-11-29 15:59:34.941051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.536 [2024-11-29 15:59:34.941077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:23.536 [2024-11-29 15:59:34.941090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.781 ms 00:17:23.536 [2024-11-29 15:59:34.941095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.536 [2024-11-29 15:59:34.949872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.536 [2024-11-29 15:59:34.949896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:23.536 [2024-11-29 15:59:34.949904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.738 ms 00:17:23.536 [2024-11-29 15:59:34.949909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.536 [2024-11-29 15:59:34.950185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.536 [2024-11-29 15:59:34.950201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:23.536 [2024-11-29 15:59:34.950208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:17:23.536 [2024-11-29 15:59:34.950215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.798 [2024-11-29 15:59:34.995116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.798 [2024-11-29 15:59:34.995156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:23.798 [2024-11-29 15:59:34.995165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.884 ms 00:17:23.798 [2024-11-29 15:59:34.995175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.798 [2024-11-29 15:59:35.003026] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:23.798 [2024-11-29 15:59:35.014207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.798 [2024-11-29 15:59:35.014235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:23.798 [2024-11-29 15:59:35.014245] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.970 ms 00:17:23.798 [2024-11-29 15:59:35.014251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.798 [2024-11-29 15:59:35.014301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.798 [2024-11-29 15:59:35.014308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:23.798 [2024-11-29 15:59:35.014317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:23.798 [2024-11-29 15:59:35.014324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.798 [2024-11-29 15:59:35.014360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.798 [2024-11-29 15:59:35.014366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:23.798 [2024-11-29 15:59:35.014372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:23.798 [2024-11-29 15:59:35.014378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.798 [2024-11-29 15:59:35.015300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.798 [2024-11-29 15:59:35.015326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:23.798 [2024-11-29 15:59:35.015334] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.906 ms 00:17:23.798 [2024-11-29 15:59:35.015339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.798 [2024-11-29 15:59:35.015363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.798 [2024-11-29 15:59:35.015374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:23.798 [2024-11-29 15:59:35.015379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:23.798 [2024-11-29 15:59:35.015385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.798 [2024-11-29 15:59:35.015409] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:23.798 [2024-11-29 15:59:35.015416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.798 [2024-11-29 15:59:35.015421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:23.798 [2024-11-29 15:59:35.015427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:23.798 [2024-11-29 15:59:35.015433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.798 [2024-11-29 15:59:35.033292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.798 [2024-11-29 15:59:35.033319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:23.798 [2024-11-29 15:59:35.033327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.844 ms 00:17:23.798 [2024-11-29 15:59:35.033333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.798 [2024-11-29 15:59:35.033397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.798 [2024-11-29 15:59:35.033405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:23.798 [2024-11-29 15:59:35.033412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:23.798 [2024-11-29 15:59:35.033417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.798 [2024-11-29 15:59:35.034038] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:23.798 [2024-11-29 15:59:35.036481] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 209.898 ms, result 0 00:17:23.798 [2024-11-29 15:59:35.037172] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:23.798 [2024-11-29 15:59:35.052213] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:24.741  [2024-11-29T15:59:37.115Z] Copying: 22/256 [MB] (22 MBps) [2024-11-29T15:59:38.502Z] Copying: 43/256 [MB] (21 MBps) [2024-11-29T15:59:39.446Z] Copying: 61/256 [MB] (18 MBps) [2024-11-29T15:59:40.389Z] Copying: 79/256 [MB] (17 MBps) [2024-11-29T15:59:41.333Z] Copying: 98/256 [MB] (18 MBps) [2024-11-29T15:59:42.313Z] Copying: 119/256 [MB] (21 MBps) [2024-11-29T15:59:43.254Z] Copying: 143/256 [MB] (23 MBps) [2024-11-29T15:59:44.195Z] Copying: 165/256 [MB] (22 MBps) [2024-11-29T15:59:45.131Z] Copying: 188/256 [MB] (23 MBps) [2024-11-29T15:59:46.071Z] Copying: 227/256 [MB] (38 MBps) [2024-11-29T15:59:46.330Z] Copying: 256/256 [MB] (average 23 MBps)[2024-11-29 15:59:46.128529] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:34.899 [2024-11-29 15:59:46.135997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.899 [2024-11-29 15:59:46.136039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:34.899 [2024-11-29 15:59:46.136051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:34.899 [2024-11-29 15:59:46.136057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.899 [2024-11-29 15:59:46.136079] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:34.899 [2024-11-29 15:59:46.138200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.899 [2024-11-29 15:59:46.138229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:34.899 [2024-11-29 15:59:46.138239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.108 ms 00:17:34.899 [2024-11-29 15:59:46.138245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.899 [2024-11-29 15:59:46.138487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.899 [2024-11-29 15:59:46.138507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:34.899 [2024-11-29 15:59:46.138514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:17:34.899 [2024-11-29 15:59:46.138524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.899 [2024-11-29 15:59:46.141319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.899 [2024-11-29 15:59:46.141339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:34.899 [2024-11-29 15:59:46.141346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.782 ms 00:17:34.899 [2024-11-29 15:59:46.141353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.899 [2024-11-29 15:59:46.146544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.899 [2024-11-29 15:59:46.146570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:34.899 
[2024-11-29 15:59:46.146578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.167 ms 00:17:34.899 [2024-11-29 15:59:46.146585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.899 [2024-11-29 15:59:46.166236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.899 [2024-11-29 15:59:46.166265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:34.899 [2024-11-29 15:59:46.166274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.608 ms 00:17:34.900 [2024-11-29 15:59:46.166280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.900 [2024-11-29 15:59:46.177413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.900 [2024-11-29 15:59:46.177442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:34.900 [2024-11-29 15:59:46.177452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.088 ms 00:17:34.900 [2024-11-29 15:59:46.177458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.900 [2024-11-29 15:59:46.177572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.900 [2024-11-29 15:59:46.177581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:34.900 [2024-11-29 15:59:46.177590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:34.900 [2024-11-29 15:59:46.177595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.900 [2024-11-29 15:59:46.195229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.900 [2024-11-29 15:59:46.195254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:34.900 [2024-11-29 15:59:46.195263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.621 ms 00:17:34.900 [2024-11-29 15:59:46.195269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.900 [2024-11-29 15:59:46.212570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.900 [2024-11-29 15:59:46.212595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:34.900 [2024-11-29 15:59:46.212603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.256 ms 00:17:34.900 [2024-11-29 15:59:46.212608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.900 [2024-11-29 15:59:46.229749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.900 [2024-11-29 15:59:46.229777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:34.900 [2024-11-29 15:59:46.229785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.105 ms 00:17:34.900 [2024-11-29 15:59:46.229790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.900 [2024-11-29 15:59:46.247123] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.900 [2024-11-29 15:59:46.247148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:34.900 [2024-11-29 15:59:46.247156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.273 ms 00:17:34.900 [2024-11-29 15:59:46.247161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.900 [2024-11-29 15:59:46.247196] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:34.900 [2024-11-29 15:59:46.247209] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 
15:59:46.247358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:17:34.900 [2024-11-29 15:59:46.247503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:34.900 [2024-11-29 15:59:46.247612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:34.901 [2024-11-29 15:59:46.247800] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:34.901 [2024-11-29 15:59:46.247807] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 08798bdb-4e1e-4c21-9160-7bdd4bf640f7 00:17:34.901 [2024-11-29 15:59:46.247813] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:34.901 [2024-11-29 15:59:46.247819] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:34.901 [2024-11-29 15:59:46.247824] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:34.901 [2024-11-29 15:59:46.247830] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:34.901 [2024-11-29 15:59:46.247836] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:34.901 [2024-11-29 15:59:46.247843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:34.901 [2024-11-29 15:59:46.247849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:34.901 [2024-11-29 15:59:46.247854] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:34.901 [2024-11-29 15:59:46.247859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:34.901 [2024-11-29 15:59:46.247865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.901 [2024-11-29 15:59:46.247871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:34.901 [2024-11-29 15:59:46.247877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:17:34.901 [2024-11-29 15:59:46.247883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.901 [2024-11-29 15:59:46.257346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.901 [2024-11-29 15:59:46.257372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:34.901 [2024-11-29 15:59:46.257383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.450 ms 00:17:34.901 [2024-11-29 15:59:46.257389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.901 [2024-11-29 15:59:46.257553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.901 [2024-11-29 15:59:46.257571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:34.901 [2024-11-29 15:59:46.257577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:17:34.901 [2024-11-29 15:59:46.257583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.901 [2024-11-29 15:59:46.287092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.901 [2024-11-29 15:59:46.287119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:34.901 [2024-11-29 15:59:46.287129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.901 [2024-11-29 15:59:46.287135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.901 [2024-11-29 15:59:46.287194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.901 [2024-11-29 15:59:46.287202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:34.901 [2024-11-29 15:59:46.287208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.901 [2024-11-29 15:59:46.287214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:34.901 [2024-11-29 15:59:46.287245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.901 [2024-11-29 15:59:46.287253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:34.901 [2024-11-29 15:59:46.287259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.901 [2024-11-29 15:59:46.287267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.901 [2024-11-29 15:59:46.287281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.901 [2024-11-29 15:59:46.287288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:34.901 [2024-11-29 15:59:46.287294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.901 [2024-11-29 15:59:46.287299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.160 [2024-11-29 15:59:46.343878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.160 [2024-11-29 15:59:46.343909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:35.160 [2024-11-29 15:59:46.343920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.160 [2024-11-29 15:59:46.343926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.160 [2024-11-29 15:59:46.366415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.160 [2024-11-29 15:59:46.366445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:35.160 [2024-11-29 15:59:46.366453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.160 [2024-11-29 15:59:46.366459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.160 [2024-11-29 15:59:46.366497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.160 [2024-11-29 15:59:46.366504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:35.160 [2024-11-29 15:59:46.366510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.160 [2024-11-29 15:59:46.366515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.160 [2024-11-29 15:59:46.366543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.160 [2024-11-29 15:59:46.366549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:35.160 [2024-11-29 15:59:46.366554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.160 [2024-11-29 15:59:46.366561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.160 [2024-11-29 15:59:46.366629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.160 [2024-11-29 15:59:46.366637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:35.160 [2024-11-29 15:59:46.366643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.160 [2024-11-29 15:59:46.366649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.160 [2024-11-29 15:59:46.366674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.160 [2024-11-29 15:59:46.366681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:35.160 [2024-11-29 15:59:46.366686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.160 
[2024-11-29 15:59:46.366692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.160 [2024-11-29 15:59:46.366720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:35.160 [2024-11-29 15:59:46.366727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:35.160 [2024-11-29 15:59:46.366733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:35.160 [2024-11-29 15:59:46.366738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.160 [2024-11-29 15:59:46.366774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:35.160 [2024-11-29 15:59:46.366784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:35.160 [2024-11-29 15:59:46.366790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:35.160 [2024-11-29 15:59:46.366796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.160 [2024-11-29 15:59:46.366900] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 230.908 ms, result 0
00:17:35.728
00:17:35.728
00:17:35.728 15:59:47 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:17:36.297 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK
00:17:36.297 15:59:47 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:17:36.297 15:59:47 -- ftl/trim.sh@109 -- # fio_kill
00:17:36.297 15:59:47 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:17:36.297 15:59:47 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:17:36.297 15:59:47 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
00:17:36.297 15:59:47 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data
00:17:36.297 15:59:47 -- ftl/trim.sh@20 -- # killprocess 72474
00:17:36.297 15:59:47 -- common/autotest_common.sh@936 -- # '[' -z 72474 ']'
00:17:36.297 Process with pid 72474 is not found
00:17:36.297 15:59:47 -- common/autotest_common.sh@940 -- # kill -0 72474
00:17:36.297 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (72474) - No such process
00:17:36.297 15:59:47 -- common/autotest_common.sh@963 -- # echo 'Process with pid 72474 is not found'
00:17:36.297
00:17:36.297 real 1m15.956s
00:17:36.297 user 1m31.663s
00:17:36.297 sys 0m15.820s
00:17:36.297 15:59:47 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:17:36.297 15:59:47 -- common/autotest_common.sh@10 -- # set +x
00:17:36.297 ************************************
00:17:36.297 END TEST ftl_trim
00:17:36.297 ************************************
00:17:36.297 15:59:47 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0
00:17:36.297 15:59:47 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:17:36.297 15:59:47 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:17:36.297 15:59:47 -- common/autotest_common.sh@10 -- # set +x
00:17:36.297 ************************************
00:17:36.297 START TEST ftl_restore
00:17:36.297 ************************************
00:17:36.297 15:59:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0
00:17:36.557 * Looking for test storage...
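The ftl_trim teardown traced above follows the standard pattern for these FTL test scripts: verify the data file read back through the FTL bdev against the md5 recorded earlier, remove the scratch files, then reap the target process while tolerating one that has already exited. A minimal bash sketch of that flow, reusing the paths and pid from the trace (a simplification for illustration, not the verbatim trim.sh or autotest_common.sh):

    testdir=/home/vagrant/spdk_repo/spdk/test/ftl

    # Verify: testfile.md5 was recorded from the pre-shutdown data, so a
    # clean 'OK' means the FTL device brought the data back intact.
    md5sum -c "$testdir/testfile.md5"

    # Cleanup: remove the scratch files the trim test created.
    rm -f "$testdir/testfile.md5" "$testdir/config/ftl.json" \
          "$testdir/random_pattern" "$testdir/data"

    # Reap the SPDK target; tolerate a process that already exited, as
    # happened here (pid 72474 was gone, so only a notice was printed).
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "Process with pid $pid is not found"
            return 0
        fi
        kill "$pid"
    }
    killprocess 72474

kill -0 probes for process existence without delivering a signal, which is how the helper distinguishes an already-gone target (as with pid 72474 in this run) from a live one that still needs killing.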
00:17:36.557 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:36.557 15:59:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:36.557 15:59:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:36.557 15:59:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:36.557 15:59:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:36.557 15:59:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:36.557 15:59:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:36.557 15:59:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:36.557 15:59:47 -- scripts/common.sh@335 -- # IFS=.-: 00:17:36.557 15:59:47 -- scripts/common.sh@335 -- # read -ra ver1 00:17:36.557 15:59:47 -- scripts/common.sh@336 -- # IFS=.-: 00:17:36.557 15:59:47 -- scripts/common.sh@336 -- # read -ra ver2 00:17:36.557 15:59:47 -- scripts/common.sh@337 -- # local 'op=<' 00:17:36.557 15:59:47 -- scripts/common.sh@339 -- # ver1_l=2 00:17:36.557 15:59:47 -- scripts/common.sh@340 -- # ver2_l=1 00:17:36.557 15:59:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:36.557 15:59:47 -- scripts/common.sh@343 -- # case "$op" in 00:17:36.557 15:59:47 -- scripts/common.sh@344 -- # : 1 00:17:36.557 15:59:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:36.557 15:59:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:36.557 15:59:47 -- scripts/common.sh@364 -- # decimal 1 00:17:36.557 15:59:47 -- scripts/common.sh@352 -- # local d=1 00:17:36.557 15:59:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:36.557 15:59:47 -- scripts/common.sh@354 -- # echo 1 00:17:36.557 15:59:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:36.557 15:59:47 -- scripts/common.sh@365 -- # decimal 2 00:17:36.557 15:59:47 -- scripts/common.sh@352 -- # local d=2 00:17:36.557 15:59:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:36.557 15:59:47 -- scripts/common.sh@354 -- # echo 2 00:17:36.557 15:59:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:36.557 15:59:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:36.557 15:59:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:36.557 15:59:47 -- scripts/common.sh@367 -- # return 0 00:17:36.557 15:59:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:36.557 15:59:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:36.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.557 --rc genhtml_branch_coverage=1 00:17:36.557 --rc genhtml_function_coverage=1 00:17:36.557 --rc genhtml_legend=1 00:17:36.557 --rc geninfo_all_blocks=1 00:17:36.557 --rc geninfo_unexecuted_blocks=1 00:17:36.557 00:17:36.557 ' 00:17:36.557 15:59:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:36.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.557 --rc genhtml_branch_coverage=1 00:17:36.557 --rc genhtml_function_coverage=1 00:17:36.557 --rc genhtml_legend=1 00:17:36.557 --rc geninfo_all_blocks=1 00:17:36.557 --rc geninfo_unexecuted_blocks=1 00:17:36.557 00:17:36.557 ' 00:17:36.557 15:59:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:36.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.557 --rc genhtml_branch_coverage=1 00:17:36.557 --rc genhtml_function_coverage=1 00:17:36.557 --rc genhtml_legend=1 00:17:36.557 --rc geninfo_all_blocks=1 00:17:36.557 --rc geninfo_unexecuted_blocks=1 00:17:36.557 00:17:36.557 ' 00:17:36.557 15:59:47 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:36.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.557 --rc genhtml_branch_coverage=1 00:17:36.557 --rc genhtml_function_coverage=1 00:17:36.558 --rc genhtml_legend=1 00:17:36.558 --rc geninfo_all_blocks=1 00:17:36.558 --rc geninfo_unexecuted_blocks=1 00:17:36.558 00:17:36.558 ' 00:17:36.558 15:59:47 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:36.558 15:59:47 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:36.558 15:59:47 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:36.558 15:59:47 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:36.558 15:59:47 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:36.558 15:59:47 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:36.558 15:59:47 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:36.558 15:59:47 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:36.558 15:59:47 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:36.558 15:59:47 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:36.558 15:59:47 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:36.558 15:59:47 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:36.558 15:59:47 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:36.558 15:59:47 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:36.558 15:59:47 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:36.558 15:59:47 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:36.558 15:59:47 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:36.558 15:59:47 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:36.558 15:59:47 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:36.558 15:59:47 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:36.558 15:59:47 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:36.558 15:59:47 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:36.558 15:59:47 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:36.558 15:59:47 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:36.558 15:59:47 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:36.558 15:59:47 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:36.558 15:59:47 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:36.558 15:59:47 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:36.558 15:59:47 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:36.558 15:59:47 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:36.558 15:59:47 -- ftl/restore.sh@13 -- # mktemp -d 00:17:36.558 15:59:47 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.h5IJBBsn01 00:17:36.558 15:59:47 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:36.558 15:59:47 -- ftl/restore.sh@16 -- # case $opt in 00:17:36.558 15:59:47 -- ftl/restore.sh@18 -- # nv_cache=0000:00:06.0 00:17:36.558 15:59:47 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 
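The xtrace lines around this point show restore.sh consuming its command line: the optstring ':u:c:f' declares -u <arg>, -c <arg> and -f, and the '-c 0000:00:06.0' passed in by ftl.sh lands in nv_cache, after which the remaining positional argument becomes the base device. Roughly, as inferred from the trace (a sketch, not the verbatim restore.sh; the -u and -f branches are not exercised in this run):

    while getopts ':u:c:f' opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;   # -c 0000:00:06.0 -> NV cache device BDF
            u | f) ;;                # defined by the optstring, unused here
        esac
    done
    shift 2                          # $((OPTIND - 1)) == 2: drop '-c <bdf>'
    device=$1                        # 0000:00:07.0 -> base (data) device BDF
    timeout=240                      # seconds allotted to the restore steps

The shift count matches the single option-with-argument consumed, leaving the base device BDF as $1, which is exactly what the restore.sh@23-25 trace lines that follow assign.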
00:17:36.558 15:59:47 -- ftl/restore.sh@23 -- # shift 2 00:17:36.558 15:59:47 -- ftl/restore.sh@24 -- # device=0000:00:07.0 00:17:36.558 15:59:47 -- ftl/restore.sh@25 -- # timeout=240 00:17:36.558 15:59:47 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:36.558 15:59:47 -- ftl/restore.sh@39 -- # svcpid=72735 00:17:36.558 15:59:47 -- ftl/restore.sh@41 -- # waitforlisten 72735 00:17:36.558 15:59:47 -- common/autotest_common.sh@829 -- # '[' -z 72735 ']' 00:17:36.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.558 15:59:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.558 15:59:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.558 15:59:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.558 15:59:47 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:36.558 15:59:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.558 15:59:47 -- common/autotest_common.sh@10 -- # set +x 00:17:36.558 [2024-11-29 15:59:47.944565] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:36.558 [2024-11-29 15:59:47.944694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72735 ] 00:17:36.818 [2024-11-29 15:59:48.092095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.078 [2024-11-29 15:59:48.276119] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:37.078 [2024-11-29 15:59:48.276330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.017 15:59:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.017 15:59:49 -- common/autotest_common.sh@862 -- # return 0 00:17:38.017 15:59:49 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:17:38.017 15:59:49 -- ftl/common.sh@54 -- # local name=nvme0 00:17:38.017 15:59:49 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:17:38.017 15:59:49 -- ftl/common.sh@56 -- # local size=103424 00:17:38.017 15:59:49 -- ftl/common.sh@59 -- # local base_bdev 00:17:38.017 15:59:49 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:17:38.277 15:59:49 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:38.277 15:59:49 -- ftl/common.sh@62 -- # local base_size 00:17:38.277 15:59:49 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:38.277 15:59:49 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:17:38.277 15:59:49 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:38.277 15:59:49 -- common/autotest_common.sh@1369 -- # local bs 00:17:38.277 15:59:49 -- common/autotest_common.sh@1370 -- # local nb 00:17:38.277 15:59:49 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:38.538 15:59:49 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:38.538 { 00:17:38.538 "name": "nvme0n1", 00:17:38.538 "aliases": [ 00:17:38.538 "057f7ae2-c591-4663-bc3b-4292b65cf153" 00:17:38.538 ], 00:17:38.538 "product_name": "NVMe disk", 00:17:38.538 "block_size": 4096, 00:17:38.538 "num_blocks": 1310720, 00:17:38.538 "uuid": 
"057f7ae2-c591-4663-bc3b-4292b65cf153", 00:17:38.538 "assigned_rate_limits": { 00:17:38.538 "rw_ios_per_sec": 0, 00:17:38.538 "rw_mbytes_per_sec": 0, 00:17:38.538 "r_mbytes_per_sec": 0, 00:17:38.538 "w_mbytes_per_sec": 0 00:17:38.538 }, 00:17:38.538 "claimed": true, 00:17:38.538 "claim_type": "read_many_write_one", 00:17:38.538 "zoned": false, 00:17:38.538 "supported_io_types": { 00:17:38.538 "read": true, 00:17:38.538 "write": true, 00:17:38.538 "unmap": true, 00:17:38.538 "write_zeroes": true, 00:17:38.538 "flush": true, 00:17:38.538 "reset": true, 00:17:38.538 "compare": true, 00:17:38.538 "compare_and_write": false, 00:17:38.538 "abort": true, 00:17:38.538 "nvme_admin": true, 00:17:38.538 "nvme_io": true 00:17:38.538 }, 00:17:38.538 "driver_specific": { 00:17:38.538 "nvme": [ 00:17:38.538 { 00:17:38.538 "pci_address": "0000:00:07.0", 00:17:38.538 "trid": { 00:17:38.538 "trtype": "PCIe", 00:17:38.538 "traddr": "0000:00:07.0" 00:17:38.538 }, 00:17:38.538 "ctrlr_data": { 00:17:38.538 "cntlid": 0, 00:17:38.538 "vendor_id": "0x1b36", 00:17:38.538 "model_number": "QEMU NVMe Ctrl", 00:17:38.538 "serial_number": "12341", 00:17:38.538 "firmware_revision": "8.0.0", 00:17:38.538 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:38.538 "oacs": { 00:17:38.538 "security": 0, 00:17:38.538 "format": 1, 00:17:38.538 "firmware": 0, 00:17:38.538 "ns_manage": 1 00:17:38.538 }, 00:17:38.538 "multi_ctrlr": false, 00:17:38.538 "ana_reporting": false 00:17:38.538 }, 00:17:38.538 "vs": { 00:17:38.538 "nvme_version": "1.4" 00:17:38.538 }, 00:17:38.538 "ns_data": { 00:17:38.538 "id": 1, 00:17:38.538 "can_share": false 00:17:38.538 } 00:17:38.538 } 00:17:38.538 ], 00:17:38.538 "mp_policy": "active_passive" 00:17:38.538 } 00:17:38.538 } 00:17:38.538 ]' 00:17:38.538 15:59:49 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:38.538 15:59:49 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:38.538 15:59:49 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:38.538 15:59:49 -- common/autotest_common.sh@1373 -- # nb=1310720 00:17:38.538 15:59:49 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:17:38.538 15:59:49 -- common/autotest_common.sh@1377 -- # echo 5120 00:17:38.538 15:59:49 -- ftl/common.sh@63 -- # base_size=5120 00:17:38.538 15:59:49 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:38.538 15:59:49 -- ftl/common.sh@67 -- # clear_lvols 00:17:38.538 15:59:49 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:38.538 15:59:49 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:38.798 15:59:50 -- ftl/common.sh@28 -- # stores=9ead3772-c5d6-4609-ba9f-c70920c68bfb 00:17:38.798 15:59:50 -- ftl/common.sh@29 -- # for lvs in $stores 00:17:38.798 15:59:50 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9ead3772-c5d6-4609-ba9f-c70920c68bfb 00:17:39.059 15:59:50 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:39.318 15:59:50 -- ftl/common.sh@68 -- # lvs=98ccdb1f-c57f-40b2-8a47-adf8f53d3bff 00:17:39.318 15:59:50 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 98ccdb1f-c57f-40b2-8a47-adf8f53d3bff 00:17:39.577 15:59:50 -- ftl/restore.sh@43 -- # split_bdev=bc148325-e5d5-49ee-93ea-66b336cd2670 00:17:39.577 15:59:50 -- ftl/restore.sh@44 -- # '[' -n 0000:00:06.0 ']' 00:17:39.577 15:59:50 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:06.0 
bc148325-e5d5-49ee-93ea-66b336cd2670 00:17:39.577 15:59:50 -- ftl/common.sh@35 -- # local name=nvc0 00:17:39.577 15:59:50 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:17:39.577 15:59:50 -- ftl/common.sh@37 -- # local base_bdev=bc148325-e5d5-49ee-93ea-66b336cd2670 00:17:39.577 15:59:50 -- ftl/common.sh@38 -- # local cache_size= 00:17:39.577 15:59:50 -- ftl/common.sh@41 -- # get_bdev_size bc148325-e5d5-49ee-93ea-66b336cd2670 00:17:39.577 15:59:50 -- common/autotest_common.sh@1367 -- # local bdev_name=bc148325-e5d5-49ee-93ea-66b336cd2670 00:17:39.577 15:59:50 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:39.577 15:59:50 -- common/autotest_common.sh@1369 -- # local bs 00:17:39.577 15:59:50 -- common/autotest_common.sh@1370 -- # local nb 00:17:39.577 15:59:50 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bc148325-e5d5-49ee-93ea-66b336cd2670 00:17:39.577 15:59:50 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:39.577 { 00:17:39.577 "name": "bc148325-e5d5-49ee-93ea-66b336cd2670", 00:17:39.577 "aliases": [ 00:17:39.577 "lvs/nvme0n1p0" 00:17:39.577 ], 00:17:39.577 "product_name": "Logical Volume", 00:17:39.577 "block_size": 4096, 00:17:39.577 "num_blocks": 26476544, 00:17:39.577 "uuid": "bc148325-e5d5-49ee-93ea-66b336cd2670", 00:17:39.577 "assigned_rate_limits": { 00:17:39.577 "rw_ios_per_sec": 0, 00:17:39.577 "rw_mbytes_per_sec": 0, 00:17:39.577 "r_mbytes_per_sec": 0, 00:17:39.577 "w_mbytes_per_sec": 0 00:17:39.577 }, 00:17:39.577 "claimed": false, 00:17:39.577 "zoned": false, 00:17:39.577 "supported_io_types": { 00:17:39.577 "read": true, 00:17:39.577 "write": true, 00:17:39.577 "unmap": true, 00:17:39.577 "write_zeroes": true, 00:17:39.577 "flush": false, 00:17:39.577 "reset": true, 00:17:39.577 "compare": false, 00:17:39.577 "compare_and_write": false, 00:17:39.577 "abort": false, 00:17:39.577 "nvme_admin": false, 00:17:39.577 "nvme_io": false 00:17:39.577 }, 00:17:39.577 "driver_specific": { 00:17:39.577 "lvol": { 00:17:39.577 "lvol_store_uuid": "98ccdb1f-c57f-40b2-8a47-adf8f53d3bff", 00:17:39.577 "base_bdev": "nvme0n1", 00:17:39.577 "thin_provision": true, 00:17:39.577 "snapshot": false, 00:17:39.577 "clone": false, 00:17:39.577 "esnap_clone": false 00:17:39.577 } 00:17:39.577 } 00:17:39.577 } 00:17:39.577 ]' 00:17:39.577 15:59:50 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:39.577 15:59:50 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:39.577 15:59:50 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:39.836 15:59:51 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:39.836 15:59:51 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:39.836 15:59:51 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:39.836 15:59:51 -- ftl/common.sh@41 -- # local base_size=5171 00:17:39.836 15:59:51 -- ftl/common.sh@44 -- # local nvc_bdev 00:17:39.836 15:59:51 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:17:39.836 15:59:51 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:39.836 15:59:51 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:39.836 15:59:51 -- ftl/common.sh@48 -- # get_bdev_size bc148325-e5d5-49ee-93ea-66b336cd2670 00:17:39.836 15:59:51 -- common/autotest_common.sh@1367 -- # local bdev_name=bc148325-e5d5-49ee-93ea-66b336cd2670 00:17:39.836 15:59:51 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:39.836 15:59:51 -- common/autotest_common.sh@1369 -- # local 
bs 00:17:39.836 15:59:51 -- common/autotest_common.sh@1370 -- # local nb 00:17:39.836 15:59:51 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bc148325-e5d5-49ee-93ea-66b336cd2670 00:17:40.094 15:59:51 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:40.094 { 00:17:40.094 "name": "bc148325-e5d5-49ee-93ea-66b336cd2670", 00:17:40.094 "aliases": [ 00:17:40.094 "lvs/nvme0n1p0" 00:17:40.094 ], 00:17:40.094 "product_name": "Logical Volume", 00:17:40.094 "block_size": 4096, 00:17:40.094 "num_blocks": 26476544, 00:17:40.094 "uuid": "bc148325-e5d5-49ee-93ea-66b336cd2670", 00:17:40.094 "assigned_rate_limits": { 00:17:40.094 "rw_ios_per_sec": 0, 00:17:40.094 "rw_mbytes_per_sec": 0, 00:17:40.094 "r_mbytes_per_sec": 0, 00:17:40.094 "w_mbytes_per_sec": 0 00:17:40.094 }, 00:17:40.094 "claimed": false, 00:17:40.094 "zoned": false, 00:17:40.094 "supported_io_types": { 00:17:40.094 "read": true, 00:17:40.094 "write": true, 00:17:40.094 "unmap": true, 00:17:40.094 "write_zeroes": true, 00:17:40.094 "flush": false, 00:17:40.094 "reset": true, 00:17:40.094 "compare": false, 00:17:40.094 "compare_and_write": false, 00:17:40.094 "abort": false, 00:17:40.094 "nvme_admin": false, 00:17:40.094 "nvme_io": false 00:17:40.094 }, 00:17:40.094 "driver_specific": { 00:17:40.094 "lvol": { 00:17:40.094 "lvol_store_uuid": "98ccdb1f-c57f-40b2-8a47-adf8f53d3bff", 00:17:40.094 "base_bdev": "nvme0n1", 00:17:40.094 "thin_provision": true, 00:17:40.094 "snapshot": false, 00:17:40.094 "clone": false, 00:17:40.094 "esnap_clone": false 00:17:40.094 } 00:17:40.094 } 00:17:40.094 } 00:17:40.094 ]' 00:17:40.094 15:59:51 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:40.094 15:59:51 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:40.094 15:59:51 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:40.352 15:59:51 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:40.353 15:59:51 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:40.353 15:59:51 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:40.353 15:59:51 -- ftl/common.sh@48 -- # cache_size=5171 00:17:40.353 15:59:51 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:40.353 15:59:51 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:40.353 15:59:51 -- ftl/restore.sh@48 -- # get_bdev_size bc148325-e5d5-49ee-93ea-66b336cd2670 00:17:40.353 15:59:51 -- common/autotest_common.sh@1367 -- # local bdev_name=bc148325-e5d5-49ee-93ea-66b336cd2670 00:17:40.353 15:59:51 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:40.353 15:59:51 -- common/autotest_common.sh@1369 -- # local bs 00:17:40.353 15:59:51 -- common/autotest_common.sh@1370 -- # local nb 00:17:40.353 15:59:51 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bc148325-e5d5-49ee-93ea-66b336cd2670 00:17:40.611 15:59:51 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:40.611 { 00:17:40.611 "name": "bc148325-e5d5-49ee-93ea-66b336cd2670", 00:17:40.611 "aliases": [ 00:17:40.611 "lvs/nvme0n1p0" 00:17:40.611 ], 00:17:40.611 "product_name": "Logical Volume", 00:17:40.611 "block_size": 4096, 00:17:40.611 "num_blocks": 26476544, 00:17:40.611 "uuid": "bc148325-e5d5-49ee-93ea-66b336cd2670", 00:17:40.611 "assigned_rate_limits": { 00:17:40.611 "rw_ios_per_sec": 0, 00:17:40.611 "rw_mbytes_per_sec": 0, 00:17:40.611 "r_mbytes_per_sec": 0, 00:17:40.611 "w_mbytes_per_sec": 0 00:17:40.611 }, 00:17:40.611 
"claimed": false, 00:17:40.611 "zoned": false, 00:17:40.611 "supported_io_types": { 00:17:40.611 "read": true, 00:17:40.611 "write": true, 00:17:40.611 "unmap": true, 00:17:40.611 "write_zeroes": true, 00:17:40.611 "flush": false, 00:17:40.611 "reset": true, 00:17:40.611 "compare": false, 00:17:40.611 "compare_and_write": false, 00:17:40.611 "abort": false, 00:17:40.611 "nvme_admin": false, 00:17:40.611 "nvme_io": false 00:17:40.611 }, 00:17:40.611 "driver_specific": { 00:17:40.611 "lvol": { 00:17:40.611 "lvol_store_uuid": "98ccdb1f-c57f-40b2-8a47-adf8f53d3bff", 00:17:40.611 "base_bdev": "nvme0n1", 00:17:40.611 "thin_provision": true, 00:17:40.611 "snapshot": false, 00:17:40.611 "clone": false, 00:17:40.611 "esnap_clone": false 00:17:40.611 } 00:17:40.611 } 00:17:40.611 } 00:17:40.611 ]' 00:17:40.611 15:59:51 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:40.611 15:59:51 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:40.611 15:59:51 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:40.611 15:59:51 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:40.611 15:59:51 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:40.611 15:59:51 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:40.611 15:59:51 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:40.611 15:59:51 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d bc148325-e5d5-49ee-93ea-66b336cd2670 --l2p_dram_limit 10' 00:17:40.611 15:59:51 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:40.611 15:59:51 -- ftl/restore.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:17:40.611 15:59:51 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:40.611 15:59:51 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:40.611 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:40.611 15:59:51 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d bc148325-e5d5-49ee-93ea-66b336cd2670 --l2p_dram_limit 10 -c nvc0n1p0 00:17:40.871 [2024-11-29 15:59:52.094544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.871 [2024-11-29 15:59:52.094680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:40.871 [2024-11-29 15:59:52.094701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:40.871 [2024-11-29 15:59:52.094710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.871 [2024-11-29 15:59:52.094756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.871 [2024-11-29 15:59:52.094764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:40.871 [2024-11-29 15:59:52.094772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:40.871 [2024-11-29 15:59:52.094778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.871 [2024-11-29 15:59:52.094795] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:40.871 [2024-11-29 15:59:52.095399] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:40.871 [2024-11-29 15:59:52.095415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.871 [2024-11-29 15:59:52.095421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:40.871 [2024-11-29 15:59:52.095429] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:17:40.871 [2024-11-29 15:59:52.095435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.871 [2024-11-29 15:59:52.095485] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2b5b9a76-0a17-4aa5-bab0-1e3efc4353d5 00:17:40.871 [2024-11-29 15:59:52.096436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.871 [2024-11-29 15:59:52.096459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:40.871 [2024-11-29 15:59:52.096468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:40.871 [2024-11-29 15:59:52.096475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.871 [2024-11-29 15:59:52.101253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.871 [2024-11-29 15:59:52.101362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:40.871 [2024-11-29 15:59:52.101374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.745 ms 00:17:40.871 [2024-11-29 15:59:52.101382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.871 [2024-11-29 15:59:52.101451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.871 [2024-11-29 15:59:52.101460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:40.871 [2024-11-29 15:59:52.101467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:40.871 [2024-11-29 15:59:52.101476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.871 [2024-11-29 15:59:52.101518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.871 [2024-11-29 15:59:52.101526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:40.871 [2024-11-29 15:59:52.101531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:40.871 [2024-11-29 15:59:52.101538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.871 [2024-11-29 15:59:52.101557] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:40.871 [2024-11-29 15:59:52.104510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.871 [2024-11-29 15:59:52.104606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:40.871 [2024-11-29 15:59:52.104622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.958 ms 00:17:40.871 [2024-11-29 15:59:52.104629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.871 [2024-11-29 15:59:52.104658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.871 [2024-11-29 15:59:52.104664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:40.871 [2024-11-29 15:59:52.104671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:40.871 [2024-11-29 15:59:52.104677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.871 [2024-11-29 15:59:52.104693] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:40.871 [2024-11-29 15:59:52.104777] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:40.871 [2024-11-29 15:59:52.104789] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:40.871 [2024-11-29 15:59:52.104797] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:40.871 [2024-11-29 15:59:52.104806] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:40.871 [2024-11-29 15:59:52.104814] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:40.871 [2024-11-29 15:59:52.104822] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:40.871 [2024-11-29 15:59:52.104832] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:40.871 [2024-11-29 15:59:52.104839] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:40.871 [2024-11-29 15:59:52.104845] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:40.871 [2024-11-29 15:59:52.104852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.871 [2024-11-29 15:59:52.104858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:40.871 [2024-11-29 15:59:52.104865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:17:40.871 [2024-11-29 15:59:52.104871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.871 [2024-11-29 15:59:52.104920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.871 [2024-11-29 15:59:52.104926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:40.871 [2024-11-29 15:59:52.104935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:40.871 [2024-11-29 15:59:52.104940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.871 [2024-11-29 15:59:52.105009] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:40.871 [2024-11-29 15:59:52.105017] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:40.871 [2024-11-29 15:59:52.105025] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:40.871 [2024-11-29 15:59:52.105031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.871 [2024-11-29 15:59:52.105038] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:40.871 [2024-11-29 15:59:52.105042] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:40.871 [2024-11-29 15:59:52.105049] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:40.871 [2024-11-29 15:59:52.105054] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:40.871 [2024-11-29 15:59:52.105061] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:40.871 [2024-11-29 15:59:52.105065] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:40.871 [2024-11-29 15:59:52.105072] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:40.871 [2024-11-29 15:59:52.105077] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:40.871 [2024-11-29 15:59:52.105084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:40.871 [2024-11-29 15:59:52.105089] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:40.871 [2024-11-29 15:59:52.105098] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 97.62 MiB 00:17:40.871 [2024-11-29 15:59:52.105103] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.871 [2024-11-29 15:59:52.105111] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:40.871 [2024-11-29 15:59:52.105116] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:40.871 [2024-11-29 15:59:52.105123] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.871 [2024-11-29 15:59:52.105127] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:40.871 [2024-11-29 15:59:52.105134] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:40.871 [2024-11-29 15:59:52.105140] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:40.871 [2024-11-29 15:59:52.105147] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:40.871 [2024-11-29 15:59:52.105152] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:40.871 [2024-11-29 15:59:52.105158] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:40.871 [2024-11-29 15:59:52.105163] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:40.871 [2024-11-29 15:59:52.105169] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:40.871 [2024-11-29 15:59:52.105173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:40.871 [2024-11-29 15:59:52.105179] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:40.871 [2024-11-29 15:59:52.105184] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:40.871 [2024-11-29 15:59:52.105190] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:40.872 [2024-11-29 15:59:52.105195] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:40.872 [2024-11-29 15:59:52.105202] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:40.872 [2024-11-29 15:59:52.105208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:40.872 [2024-11-29 15:59:52.105213] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:40.872 [2024-11-29 15:59:52.105218] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:40.872 [2024-11-29 15:59:52.105224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:40.872 [2024-11-29 15:59:52.105229] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:40.872 [2024-11-29 15:59:52.105235] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:40.872 [2024-11-29 15:59:52.105240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:40.872 [2024-11-29 15:59:52.105247] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:40.872 [2024-11-29 15:59:52.105252] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:40.872 [2024-11-29 15:59:52.105259] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:40.872 [2024-11-29 15:59:52.105266] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.872 [2024-11-29 15:59:52.105273] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:40.872 [2024-11-29 15:59:52.105278] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:40.872 [2024-11-29 15:59:52.105285] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:40.872 [2024-11-29 15:59:52.105290] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:40.872 [2024-11-29 15:59:52.105298] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:40.872 [2024-11-29 15:59:52.105303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:40.872 [2024-11-29 15:59:52.105311] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:40.872 [2024-11-29 15:59:52.105318] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:40.872 [2024-11-29 15:59:52.105325] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:40.872 [2024-11-29 15:59:52.105342] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:40.872 [2024-11-29 15:59:52.105349] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:40.872 [2024-11-29 15:59:52.105354] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:40.872 [2024-11-29 15:59:52.105360] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:40.872 [2024-11-29 15:59:52.105365] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:40.872 [2024-11-29 15:59:52.105372] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:40.872 [2024-11-29 15:59:52.105377] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:40.872 [2024-11-29 15:59:52.105384] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:40.872 [2024-11-29 15:59:52.105389] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:40.872 [2024-11-29 15:59:52.105395] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:40.872 [2024-11-29 15:59:52.105401] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:40.872 [2024-11-29 15:59:52.105410] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:40.872 [2024-11-29 15:59:52.105415] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:40.872 [2024-11-29 15:59:52.105422] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:40.872 [2024-11-29 15:59:52.105428] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:40.872 [2024-11-29 15:59:52.105434] 
upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:40.872 [2024-11-29 15:59:52.105440] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:40.872 [2024-11-29 15:59:52.105446] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:40.872 [2024-11-29 15:59:52.105453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.872 [2024-11-29 15:59:52.105460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:40.872 [2024-11-29 15:59:52.105466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:17:40.872 [2024-11-29 15:59:52.105472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.872 [2024-11-29 15:59:52.117644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.872 [2024-11-29 15:59:52.117679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:40.872 [2024-11-29 15:59:52.117694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.128 ms 00:17:40.872 [2024-11-29 15:59:52.117702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.872 [2024-11-29 15:59:52.117770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.872 [2024-11-29 15:59:52.117781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:40.872 [2024-11-29 15:59:52.117787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:40.872 [2024-11-29 15:59:52.117794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.872 [2024-11-29 15:59:52.141858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.872 [2024-11-29 15:59:52.141886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:40.872 [2024-11-29 15:59:52.141895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.031 ms 00:17:40.872 [2024-11-29 15:59:52.141905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.872 [2024-11-29 15:59:52.141927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.872 [2024-11-29 15:59:52.141936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:40.872 [2024-11-29 15:59:52.141942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:40.872 [2024-11-29 15:59:52.141953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.872 [2024-11-29 15:59:52.142262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.872 [2024-11-29 15:59:52.142277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:40.872 [2024-11-29 15:59:52.142283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:17:40.872 [2024-11-29 15:59:52.142290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.872 [2024-11-29 15:59:52.142376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.872 [2024-11-29 15:59:52.142386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:40.872 [2024-11-29 15:59:52.142392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 
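By this point in the startup trace the FTL instance is initializing on top of a bdev stack that the test assembled one rpc.py call at a time. Condensed from the xtrace above, with each UUID captured from the previous call's stdout exactly as the lvs=/split_bdev= substitutions show, the stack amounts to:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Base device: QEMU NVMe at 0000:00:07.0 -> nvme0n1 (1310720 blocks x 4096 B = 5120 MiB).
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0

# Start clean: delete any lvstore left over from a previous run.
for lvs in $($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
  $rpc bdev_lvol_delete_lvstore -u "$lvs"
done

# Thin-provisioned 103424 MiB logical volume on top of nvme0n1.
lvs_uuid=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)
lvol_uuid=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs_uuid")

# NV cache: a second NVMe at 0000:00:06.0, split down to a 5171 MiB partition
# (5171 matches 103424 / 20, suggesting the cache is sized at 5% of the base volume).
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0
$rpc bdev_split_create nvc0n1 -s 5171 1

# FTL bdev on top: the lvol as base device, nvc0n1p0 as write-buffer cache,
# and the L2P table capped at 10 MiB of DRAM (--l2p_dram_limit 10).
$rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol_uuid" --l2p_dram_limit 10 -c nvc0n1p0

Because this is a first startup on a fresh cache, the trace just below notes 'First startup needs to scrub nv cache data region' and spends roughly 3.2 s in the Scrub NV cache step before the bands are finalized.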
00:17:40.872 [2024-11-29 15:59:52.142399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.872 [2024-11-29 15:59:52.154558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.872 [2024-11-29 15:59:52.154585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:40.872 [2024-11-29 15:59:52.154593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.143 ms 00:17:40.872 [2024-11-29 15:59:52.154602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.872 [2024-11-29 15:59:52.163601] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:40.872 [2024-11-29 15:59:52.165962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.872 [2024-11-29 15:59:52.165994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:40.872 [2024-11-29 15:59:52.166005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.304 ms 00:17:40.872 [2024-11-29 15:59:52.166011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.872 [2024-11-29 15:59:52.227451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.872 [2024-11-29 15:59:52.227484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:40.872 [2024-11-29 15:59:52.227497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.414 ms 00:17:40.872 [2024-11-29 15:59:52.227503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.872 [2024-11-29 15:59:52.227527] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:17:40.872 [2024-11-29 15:59:52.227536] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:17:44.160 [2024-11-29 15:59:55.462133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.160 [2024-11-29 15:59:55.462221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:44.160 [2024-11-29 15:59:55.462244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3234.578 ms 00:17:44.160 [2024-11-29 15:59:55.462253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.160 [2024-11-29 15:59:55.462488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.160 [2024-11-29 15:59:55.462505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:44.160 [2024-11-29 15:59:55.462518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:17:44.160 [2024-11-29 15:59:55.462528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.160 [2024-11-29 15:59:55.490117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.160 [2024-11-29 15:59:55.490174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:44.160 [2024-11-29 15:59:55.490193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.522 ms 00:17:44.160 [2024-11-29 15:59:55.490203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.160 [2024-11-29 15:59:55.516213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.160 [2024-11-29 15:59:55.516444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:44.160 [2024-11-29 15:59:55.516479] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.941 ms 00:17:44.160 [2024-11-29 15:59:55.516488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.160 [2024-11-29 15:59:55.516889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.160 [2024-11-29 15:59:55.516902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:44.160 [2024-11-29 15:59:55.516914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:17:44.160 [2024-11-29 15:59:55.516924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.423 [2024-11-29 15:59:55.590766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.423 [2024-11-29 15:59:55.590822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:44.423 [2024-11-29 15:59:55.590841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.752 ms 00:17:44.423 [2024-11-29 15:59:55.590849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.423 [2024-11-29 15:59:55.619278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.423 [2024-11-29 15:59:55.619334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:44.423 [2024-11-29 15:59:55.619351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.365 ms 00:17:44.423 [2024-11-29 15:59:55.619360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.423 [2024-11-29 15:59:55.620862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.423 [2024-11-29 15:59:55.620916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:44.423 [2024-11-29 15:59:55.620932] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.440 ms 00:17:44.423 [2024-11-29 15:59:55.620940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.423 [2024-11-29 15:59:55.648067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.423 [2024-11-29 15:59:55.648119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:44.423 [2024-11-29 15:59:55.648137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.037 ms 00:17:44.423 [2024-11-29 15:59:55.648144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.423 [2024-11-29 15:59:55.648213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.423 [2024-11-29 15:59:55.648223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:44.423 [2024-11-29 15:59:55.648236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:44.423 [2024-11-29 15:59:55.648248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.423 [2024-11-29 15:59:55.648363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.423 [2024-11-29 15:59:55.648374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:44.423 [2024-11-29 15:59:55.648386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:17:44.423 [2024-11-29 15:59:55.648394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.423 [2024-11-29 15:59:55.649569] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3554.504 ms, result 0 00:17:44.423 { 00:17:44.423 "name": 
"ftl0", 00:17:44.423 "uuid": "2b5b9a76-0a17-4aa5-bab0-1e3efc4353d5" 00:17:44.423 } 00:17:44.423 15:59:55 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:44.423 15:59:55 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:44.684 15:59:55 -- ftl/restore.sh@63 -- # echo ']}' 00:17:44.684 15:59:55 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:44.684 [2024-11-29 15:59:56.084915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.684 [2024-11-29 15:59:56.084999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:44.684 [2024-11-29 15:59:56.085013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:44.684 [2024-11-29 15:59:56.085024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.684 [2024-11-29 15:59:56.085050] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:44.684 [2024-11-29 15:59:56.088030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.684 [2024-11-29 15:59:56.088079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:44.684 [2024-11-29 15:59:56.088094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.951 ms 00:17:44.684 [2024-11-29 15:59:56.088114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.685 [2024-11-29 15:59:56.088392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.685 [2024-11-29 15:59:56.088410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:44.685 [2024-11-29 15:59:56.088422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:17:44.685 [2024-11-29 15:59:56.088431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.685 [2024-11-29 15:59:56.091704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.685 [2024-11-29 15:59:56.091729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:44.685 [2024-11-29 15:59:56.091742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.239 ms 00:17:44.685 [2024-11-29 15:59:56.091750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.685 [2024-11-29 15:59:56.097858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.685 [2024-11-29 15:59:56.097903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:44.685 [2024-11-29 15:59:56.097919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.079 ms 00:17:44.685 [2024-11-29 15:59:56.097928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.947 [2024-11-29 15:59:56.125034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.947 [2024-11-29 15:59:56.125245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:44.947 [2024-11-29 15:59:56.125276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.986 ms 00:17:44.947 [2024-11-29 15:59:56.125284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.947 [2024-11-29 15:59:56.143054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.948 [2024-11-29 15:59:56.143105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:44.948 
[2024-11-29 15:59:56.143123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.692 ms 00:17:44.948 [2024-11-29 15:59:56.143132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.948 [2024-11-29 15:59:56.143314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.948 [2024-11-29 15:59:56.143326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:44.948 [2024-11-29 15:59:56.143341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:17:44.948 [2024-11-29 15:59:56.143349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.948 [2024-11-29 15:59:56.170027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.948 [2024-11-29 15:59:56.170077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:44.948 [2024-11-29 15:59:56.170092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.648 ms 00:17:44.948 [2024-11-29 15:59:56.170100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.948 [2024-11-29 15:59:56.196347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.948 [2024-11-29 15:59:56.196547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:44.948 [2024-11-29 15:59:56.196575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.187 ms 00:17:44.948 [2024-11-29 15:59:56.196583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.948 [2024-11-29 15:59:56.222658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.948 [2024-11-29 15:59:56.222705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:44.948 [2024-11-29 15:59:56.222720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.985 ms 00:17:44.948 [2024-11-29 15:59:56.222727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.948 [2024-11-29 15:59:56.248703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.948 [2024-11-29 15:59:56.248751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:44.948 [2024-11-29 15:59:56.248767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.856 ms 00:17:44.948 [2024-11-29 15:59:56.248774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.948 [2024-11-29 15:59:56.248832] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:44.948 [2024-11-29 15:59:56.248848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.248861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.248870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.248881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.248889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.248899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.248907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 
/ 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.248917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.248925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.248935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.248942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.248952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.248959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.248989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.248997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249383] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:44.948 [2024-11-29 15:59:56.249455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 
15:59:56.249615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:44.949 [2024-11-29 15:59:56.249808] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:44.949 [2024-11-29 15:59:56.249820] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2b5b9a76-0a17-4aa5-bab0-1e3efc4353d5 00:17:44.949 [2024-11-29 15:59:56.249828] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:44.949 [2024-11-29 15:59:56.249837] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:44.949 [2024-11-29 15:59:56.249844] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:44.949 [2024-11-29 15:59:56.249855] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:44.949 [2024-11-29 15:59:56.249863] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:44.949 [2024-11-29 
15:59:56.249873] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:44.949 [2024-11-29 15:59:56.249881] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:44.949 [2024-11-29 15:59:56.249889] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:44.949 [2024-11-29 15:59:56.249896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:44.949 [2024-11-29 15:59:56.249907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.949 [2024-11-29 15:59:56.249918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:44.949 [2024-11-29 15:59:56.249929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.079 ms 00:17:44.949 [2024-11-29 15:59:56.249937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 [2024-11-29 15:59:56.263594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.949 [2024-11-29 15:59:56.263639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:44.949 [2024-11-29 15:59:56.263653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.582 ms 00:17:44.949 [2024-11-29 15:59:56.263662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 [2024-11-29 15:59:56.263884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.949 [2024-11-29 15:59:56.263893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:44.949 [2024-11-29 15:59:56.263903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:17:44.949 [2024-11-29 15:59:56.263911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 [2024-11-29 15:59:56.313337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.949 [2024-11-29 15:59:56.313401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:44.949 [2024-11-29 15:59:56.313416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.949 [2024-11-29 15:59:56.313424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 [2024-11-29 15:59:56.313509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.949 [2024-11-29 15:59:56.313518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:44.949 [2024-11-29 15:59:56.313528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.949 [2024-11-29 15:59:56.313536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 [2024-11-29 15:59:56.313628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.949 [2024-11-29 15:59:56.313639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:44.949 [2024-11-29 15:59:56.313650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.949 [2024-11-29 15:59:56.313657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.949 [2024-11-29 15:59:56.313678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.949 [2024-11-29 15:59:56.313719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:44.949 [2024-11-29 15:59:56.313730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.949 [2024-11-29 15:59:56.313737] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:45.211 [2024-11-29 15:59:56.397310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.211 [2024-11-29 15:59:56.397544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:45.211 [2024-11-29 15:59:56.397574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.211 [2024-11-29 15:59:56.397584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.211 [2024-11-29 15:59:56.429666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.211 [2024-11-29 15:59:56.429733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:45.211 [2024-11-29 15:59:56.429746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.211 [2024-11-29 15:59:56.429754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.211 [2024-11-29 15:59:56.429834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.211 [2024-11-29 15:59:56.429844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:45.211 [2024-11-29 15:59:56.429855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.211 [2024-11-29 15:59:56.429863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.211 [2024-11-29 15:59:56.429915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.211 [2024-11-29 15:59:56.429926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:45.211 [2024-11-29 15:59:56.429939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.211 [2024-11-29 15:59:56.429946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.211 [2024-11-29 15:59:56.430086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.211 [2024-11-29 15:59:56.430098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:45.211 [2024-11-29 15:59:56.430109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.211 [2024-11-29 15:59:56.430117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.211 [2024-11-29 15:59:56.430163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.211 [2024-11-29 15:59:56.430172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:45.211 [2024-11-29 15:59:56.430183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.211 [2024-11-29 15:59:56.430193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.211 [2024-11-29 15:59:56.430236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.211 [2024-11-29 15:59:56.430246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:45.211 [2024-11-29 15:59:56.430256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.211 [2024-11-29 15:59:56.430264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.211 [2024-11-29 15:59:56.430318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.211 [2024-11-29 15:59:56.430328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:45.211 [2024-11-29 15:59:56.430341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.211 
[2024-11-29 15:59:56.430349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.211 [2024-11-29 15:59:56.430500] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 345.538 ms, result 0 00:17:45.211 true 00:17:45.211 15:59:56 -- ftl/restore.sh@66 -- # killprocess 72735 00:17:45.211 15:59:56 -- common/autotest_common.sh@936 -- # '[' -z 72735 ']' 00:17:45.211 15:59:56 -- common/autotest_common.sh@940 -- # kill -0 72735 00:17:45.211 15:59:56 -- common/autotest_common.sh@941 -- # uname 00:17:45.211 15:59:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:45.211 15:59:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72735 00:17:45.211 killing process with pid 72735 00:17:45.211 15:59:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:45.211 15:59:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:45.211 15:59:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72735' 00:17:45.211 15:59:56 -- common/autotest_common.sh@955 -- # kill 72735 00:17:45.211 15:59:56 -- common/autotest_common.sh@960 -- # wait 72735 00:17:51.802 16:00:02 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:17:56.009 262144+0 records in 00:17:56.009 262144+0 records out 00:17:56.009 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.41241 s, 243 MB/s 00:17:56.009 16:00:06 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:17:57.395 16:00:08 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:57.395 [2024-11-29 16:00:08.604374] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
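The trace above shows the shape of the restore test's data path: ftl/restore.sh tears down the previous SPDK target (killprocess checks the pid with kill -0, confirms the process name is reactor_0 via ps, then kills and waits on it), seeds a 1 GiB random test file with dd (256K blocks of 4 KiB = 1073741824 bytes, matching the dd output), fingerprints it with md5sum, and pushes it into the ftl0 bdev with spdk_dd. A minimal sketch of that write-then-verify pattern follows, using the paths and the spdk_dd write invocation exactly as logged; the readback direction (--ib/--of), the --bs/--count sizing flags, and the final checksum comparison are assumptions about the part of restore.sh that runs after this excerpt, not commands confirmed by this log.

#!/usr/bin/env bash
# Sketch of the restore test's write-then-verify flow, assuming the layout
# from the trace above. Only the dd / md5sum / spdk_dd write step is
# confirmed by the log; the readback and compare steps are assumptions.
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk
TESTFILE=$SPDK_DIR/test/ftl/testfile
FTL_JSON=$SPDK_DIR/test/ftl/config/ftl.json

# Seed a 1 GiB random test file (256K x 4 KiB blocks), as in the trace.
dd if=/dev/urandom of="$TESTFILE" bs=4K count=256K

# Fingerprint the data before it enters the FTL bdev.
md5_before=$(md5sum "$TESTFILE" | cut -d' ' -f1)

# Write the file into ftl0; this is the spdk_dd command the log captured,
# and it is what drives the 'FTL startup' sequence and the Copying: lines.
"$SPDK_DIR"/build/bin/spdk_dd --if="$TESTFILE" --ob=ftl0 --json="$FTL_JSON"

# (Assumed) read the same range back out of ftl0 and compare checksums;
# --ib/--of mirror the logged --if/--ob, and --bs/--count here are
# hypothetical sizing flags to stop at the 1 GiB that was written.
"$SPDK_DIR"/build/bin/spdk_dd --ib=ftl0 --of="$TESTFILE.out" \
    --json="$FTL_JSON" --bs=4096 --count=262144
md5_after=$(md5sum "$TESTFILE.out" | cut -d' ' -f1)

[[ "$md5_before" == "$md5_after" ]] || { echo "FTL restore verification failed" >&2; exit 1; }

The point of pairing the checksums is that they bracket a full FTL shutdown/startup cycle: the metadata persisted during 'FTL shutdown' (Persist L2P, NV cache metadata, band info, trim metadata, superblock) must be sufficient for the 'FTL startup' sequence that follows to reconstruct the device state so that the same bytes read back.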
00:17:57.395 [2024-11-29 16:00:08.604470] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72973 ] 00:17:57.395 [2024-11-29 16:00:08.748094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.656 [2024-11-29 16:00:08.928539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.917 [2024-11-29 16:00:09.198819] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:57.917 [2024-11-29 16:00:09.199229] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:58.179 [2024-11-29 16:00:09.354762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.179 [2024-11-29 16:00:09.354825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:58.179 [2024-11-29 16:00:09.354841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:58.179 [2024-11-29 16:00:09.354852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.179 [2024-11-29 16:00:09.354908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.179 [2024-11-29 16:00:09.354919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:58.179 [2024-11-29 16:00:09.354928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:58.179 [2024-11-29 16:00:09.354936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.179 [2024-11-29 16:00:09.354956] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:58.179 [2024-11-29 16:00:09.355769] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:58.179 [2024-11-29 16:00:09.355796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.179 [2024-11-29 16:00:09.355805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:58.179 [2024-11-29 16:00:09.355814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.844 ms 00:17:58.179 [2024-11-29 16:00:09.355822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.179 [2024-11-29 16:00:09.357527] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:58.179 [2024-11-29 16:00:09.371963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.179 [2024-11-29 16:00:09.372020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:58.179 [2024-11-29 16:00:09.372034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.439 ms 00:17:58.179 [2024-11-29 16:00:09.372042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.179 [2024-11-29 16:00:09.372120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.179 [2024-11-29 16:00:09.372130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:58.179 [2024-11-29 16:00:09.372138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:58.180 [2024-11-29 16:00:09.372146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.180 [2024-11-29 16:00:09.381181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.180 [2024-11-29 
16:00:09.381224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:58.180 [2024-11-29 16:00:09.381235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.954 ms 00:17:58.180 [2024-11-29 16:00:09.381244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.180 [2024-11-29 16:00:09.381345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.180 [2024-11-29 16:00:09.381355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:58.180 [2024-11-29 16:00:09.381364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:17:58.180 [2024-11-29 16:00:09.381372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.180 [2024-11-29 16:00:09.381420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.180 [2024-11-29 16:00:09.381429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:58.180 [2024-11-29 16:00:09.381438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:58.180 [2024-11-29 16:00:09.381446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.180 [2024-11-29 16:00:09.381476] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:58.180 [2024-11-29 16:00:09.385795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.180 [2024-11-29 16:00:09.385836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:58.180 [2024-11-29 16:00:09.385847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.332 ms 00:17:58.180 [2024-11-29 16:00:09.385855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.180 [2024-11-29 16:00:09.385897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.180 [2024-11-29 16:00:09.385906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:58.180 [2024-11-29 16:00:09.385916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:58.180 [2024-11-29 16:00:09.385926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.180 [2024-11-29 16:00:09.385998] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:58.180 [2024-11-29 16:00:09.386023] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:58.180 [2024-11-29 16:00:09.386059] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:58.180 [2024-11-29 16:00:09.386075] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:58.180 [2024-11-29 16:00:09.386152] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:58.180 [2024-11-29 16:00:09.386163] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:58.180 [2024-11-29 16:00:09.386176] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:58.180 [2024-11-29 16:00:09.386187] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:58.180 [2024-11-29 16:00:09.386196] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:58.180 [2024-11-29 16:00:09.386204] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:58.180 [2024-11-29 16:00:09.386212] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:58.180 [2024-11-29 16:00:09.386220] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:58.180 [2024-11-29 16:00:09.386228] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:58.180 [2024-11-29 16:00:09.386236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.180 [2024-11-29 16:00:09.386243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:58.180 [2024-11-29 16:00:09.386252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:17:58.180 [2024-11-29 16:00:09.386259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.180 [2024-11-29 16:00:09.386326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.180 [2024-11-29 16:00:09.386335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:58.180 [2024-11-29 16:00:09.386343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:58.180 [2024-11-29 16:00:09.386350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.180 [2024-11-29 16:00:09.386420] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:58.180 [2024-11-29 16:00:09.386430] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:58.180 [2024-11-29 16:00:09.386438] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:58.180 [2024-11-29 16:00:09.386446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.180 [2024-11-29 16:00:09.386454] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:58.180 [2024-11-29 16:00:09.386461] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:58.180 [2024-11-29 16:00:09.386468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:58.180 [2024-11-29 16:00:09.386477] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:58.180 [2024-11-29 16:00:09.386487] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:58.180 [2024-11-29 16:00:09.386495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:58.180 [2024-11-29 16:00:09.386502] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:58.180 [2024-11-29 16:00:09.386509] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:58.180 [2024-11-29 16:00:09.386516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:58.180 [2024-11-29 16:00:09.386523] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:58.180 [2024-11-29 16:00:09.386531] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:58.180 [2024-11-29 16:00:09.386538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.180 [2024-11-29 16:00:09.386552] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:58.180 [2024-11-29 16:00:09.386560] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:58.180 [2024-11-29 16:00:09.386566] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:17:58.180 [2024-11-29 16:00:09.386573] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:58.180 [2024-11-29 16:00:09.386580] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:58.180 [2024-11-29 16:00:09.386587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:58.180 [2024-11-29 16:00:09.386595] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:58.180 [2024-11-29 16:00:09.386603] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:58.180 [2024-11-29 16:00:09.386610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:58.180 [2024-11-29 16:00:09.386616] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:58.180 [2024-11-29 16:00:09.386623] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:58.180 [2024-11-29 16:00:09.386630] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:58.180 [2024-11-29 16:00:09.386636] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:58.180 [2024-11-29 16:00:09.386643] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:58.180 [2024-11-29 16:00:09.386649] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:58.180 [2024-11-29 16:00:09.386656] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:58.180 [2024-11-29 16:00:09.386663] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:58.180 [2024-11-29 16:00:09.386670] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:58.180 [2024-11-29 16:00:09.386676] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:58.180 [2024-11-29 16:00:09.386683] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:58.180 [2024-11-29 16:00:09.386689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:58.180 [2024-11-29 16:00:09.386695] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:58.180 [2024-11-29 16:00:09.386702] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:58.180 [2024-11-29 16:00:09.386708] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:58.180 [2024-11-29 16:00:09.386716] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:58.180 [2024-11-29 16:00:09.386727] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:58.180 [2024-11-29 16:00:09.386735] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:58.180 [2024-11-29 16:00:09.386743] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.180 [2024-11-29 16:00:09.386751] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:58.180 [2024-11-29 16:00:09.386758] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:58.180 [2024-11-29 16:00:09.386764] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:58.180 [2024-11-29 16:00:09.386772] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:58.180 [2024-11-29 16:00:09.386779] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:58.180 [2024-11-29 16:00:09.386785] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:58.180 [2024-11-29 16:00:09.386794] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:58.180 [2024-11-29 16:00:09.386804] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:58.180 [2024-11-29 16:00:09.386824] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:58.180 [2024-11-29 16:00:09.386832] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:58.180 [2024-11-29 16:00:09.386840] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:58.180 [2024-11-29 16:00:09.386847] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:58.180 [2024-11-29 16:00:09.386855] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:58.181 [2024-11-29 16:00:09.386862] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:58.181 [2024-11-29 16:00:09.386869] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:58.181 [2024-11-29 16:00:09.386877] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:58.181 [2024-11-29 16:00:09.386884] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:58.181 [2024-11-29 16:00:09.386891] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:58.181 [2024-11-29 16:00:09.386898] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:58.181 [2024-11-29 16:00:09.386905] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:58.181 [2024-11-29 16:00:09.386913] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:58.181 [2024-11-29 16:00:09.386920] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:58.181 [2024-11-29 16:00:09.386928] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:58.181 [2024-11-29 16:00:09.386937] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:58.181 [2024-11-29 16:00:09.386944] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:58.181 [2024-11-29 16:00:09.386952] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:58.181 [2024-11-29 16:00:09.386959] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:17:58.181 [2024-11-29 16:00:09.386966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.181 [2024-11-29 16:00:09.386990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:58.181 [2024-11-29 16:00:09.386998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:17:58.181 [2024-11-29 16:00:09.387006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.181 [2024-11-29 16:00:09.405815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.181 [2024-11-29 16:00:09.405866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:58.181 [2024-11-29 16:00:09.405880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.765 ms 00:17:58.181 [2024-11-29 16:00:09.405894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.181 [2024-11-29 16:00:09.406009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.181 [2024-11-29 16:00:09.406019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:58.181 [2024-11-29 16:00:09.406027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:58.181 [2024-11-29 16:00:09.406036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.181 [2024-11-29 16:00:09.455174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.181 [2024-11-29 16:00:09.455234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:58.181 [2024-11-29 16:00:09.455248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.083 ms 00:17:58.181 [2024-11-29 16:00:09.455256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.181 [2024-11-29 16:00:09.455309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.181 [2024-11-29 16:00:09.455320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:58.181 [2024-11-29 16:00:09.455329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:58.181 [2024-11-29 16:00:09.455337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.181 [2024-11-29 16:00:09.455912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.181 [2024-11-29 16:00:09.455936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:58.181 [2024-11-29 16:00:09.455947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:17:58.181 [2024-11-29 16:00:09.455961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.181 [2024-11-29 16:00:09.456128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.181 [2024-11-29 16:00:09.456140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:58.181 [2024-11-29 16:00:09.456149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:17:58.181 [2024-11-29 16:00:09.456157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.181 [2024-11-29 16:00:09.473017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.181 [2024-11-29 16:00:09.473060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:58.181 [2024-11-29 16:00:09.473072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.836 ms 00:17:58.181 [2024-11-29 
16:00:09.473081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.181 [2024-11-29 16:00:09.487374] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:58.181 [2024-11-29 16:00:09.487429] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:58.181 [2024-11-29 16:00:09.487443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.181 [2024-11-29 16:00:09.487452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:58.181 [2024-11-29 16:00:09.487462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.248 ms 00:17:58.181 [2024-11-29 16:00:09.487470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.181 [2024-11-29 16:00:09.513446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.181 [2024-11-29 16:00:09.513494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:58.181 [2024-11-29 16:00:09.513507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.919 ms 00:17:58.181 [2024-11-29 16:00:09.513516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.181 [2024-11-29 16:00:09.526650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.181 [2024-11-29 16:00:09.526836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:58.181 [2024-11-29 16:00:09.526857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.075 ms 00:17:58.181 [2024-11-29 16:00:09.526865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.181 [2024-11-29 16:00:09.539796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.181 [2024-11-29 16:00:09.539860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:58.181 [2024-11-29 16:00:09.539884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.891 ms 00:17:58.181 [2024-11-29 16:00:09.539890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.181 [2024-11-29 16:00:09.540306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.181 [2024-11-29 16:00:09.540321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:58.181 [2024-11-29 16:00:09.540330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:17:58.181 [2024-11-29 16:00:09.540338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.181 [2024-11-29 16:00:09.606780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.181 [2024-11-29 16:00:09.606843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:58.181 [2024-11-29 16:00:09.606859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.424 ms 00:17:58.181 [2024-11-29 16:00:09.606868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.443 [2024-11-29 16:00:09.618379] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:58.443 [2024-11-29 16:00:09.621422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.443 [2024-11-29 16:00:09.621469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:58.443 [2024-11-29 16:00:09.621482] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.492 ms 00:17:58.443 [2024-11-29 16:00:09.621491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.443 [2024-11-29 16:00:09.621571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.443 [2024-11-29 16:00:09.621582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:58.443 [2024-11-29 16:00:09.621592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:58.443 [2024-11-29 16:00:09.621600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.443 [2024-11-29 16:00:09.621669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.443 [2024-11-29 16:00:09.621680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:58.443 [2024-11-29 16:00:09.621719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:58.443 [2024-11-29 16:00:09.621728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.443 [2024-11-29 16:00:09.623146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.443 [2024-11-29 16:00:09.623190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:58.443 [2024-11-29 16:00:09.623201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.398 ms 00:17:58.443 [2024-11-29 16:00:09.623209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.443 [2024-11-29 16:00:09.623245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.443 [2024-11-29 16:00:09.623254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:58.443 [2024-11-29 16:00:09.623263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:58.443 [2024-11-29 16:00:09.623276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.443 [2024-11-29 16:00:09.623314] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:58.443 [2024-11-29 16:00:09.623324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.443 [2024-11-29 16:00:09.623332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:58.443 [2024-11-29 16:00:09.623343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:58.443 [2024-11-29 16:00:09.623350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.443 [2024-11-29 16:00:09.649675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.443 [2024-11-29 16:00:09.649734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:58.443 [2024-11-29 16:00:09.649747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.303 ms 00:17:58.443 [2024-11-29 16:00:09.649755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.443 [2024-11-29 16:00:09.649842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.443 [2024-11-29 16:00:09.649860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:58.443 [2024-11-29 16:00:09.649870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:58.443 [2024-11-29 16:00:09.649878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.443 [2024-11-29 16:00:09.651152] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 295.867 ms, result 0 00:17:59.388  [2024-11-29T16:00:11.760Z] Copying: 10112/1048576 [kB] (10112 kBps) [2024-11-29T16:00:12.697Z] Copying: 33/1024 [MB] (23 MBps) [2024-11-29T16:00:14.085Z] Copying: 66/1024 [MB] (32 MBps) [2024-11-29T16:00:14.686Z] Copying: 93/1024 [MB] (27 MBps) [2024-11-29T16:00:16.085Z] Copying: 110/1024 [MB] (16 MBps) [2024-11-29T16:00:17.026Z] Copying: 126/1024 [MB] (16 MBps) [2024-11-29T16:00:17.965Z] Copying: 139/1024 [MB] (12 MBps) [2024-11-29T16:00:18.905Z] Copying: 161/1024 [MB] (22 MBps) [2024-11-29T16:00:19.845Z] Copying: 184/1024 [MB] (22 MBps) [2024-11-29T16:00:20.785Z] Copying: 204/1024 [MB] (20 MBps) [2024-11-29T16:00:21.727Z] Copying: 221/1024 [MB] (17 MBps) [2024-11-29T16:00:22.671Z] Copying: 244/1024 [MB] (22 MBps) [2024-11-29T16:00:24.057Z] Copying: 266/1024 [MB] (22 MBps) [2024-11-29T16:00:25.001Z] Copying: 288/1024 [MB] (21 MBps) [2024-11-29T16:00:25.944Z] Copying: 303/1024 [MB] (15 MBps) [2024-11-29T16:00:26.889Z] Copying: 316/1024 [MB] (12 MBps) [2024-11-29T16:00:27.832Z] Copying: 326/1024 [MB] (10 MBps) [2024-11-29T16:00:28.775Z] Copying: 341/1024 [MB] (14 MBps) [2024-11-29T16:00:29.711Z] Copying: 359504/1048576 [kB] (10112 kBps) [2024-11-29T16:00:31.099Z] Copying: 369/1024 [MB] (18 MBps) [2024-11-29T16:00:31.673Z] Copying: 399/1024 [MB] (29 MBps) [2024-11-29T16:00:33.055Z] Copying: 410/1024 [MB] (11 MBps) [2024-11-29T16:00:34.001Z] Copying: 422/1024 [MB] (12 MBps) [2024-11-29T16:00:34.945Z] Copying: 433/1024 [MB] (11 MBps) [2024-11-29T16:00:35.886Z] Copying: 448/1024 [MB] (15 MBps) [2024-11-29T16:00:36.829Z] Copying: 468/1024 [MB] (19 MBps) [2024-11-29T16:00:37.773Z] Copying: 483/1024 [MB] (15 MBps) [2024-11-29T16:00:38.717Z] Copying: 501/1024 [MB] (17 MBps) [2024-11-29T16:00:40.106Z] Copying: 515/1024 [MB] (14 MBps) [2024-11-29T16:00:40.680Z] Copying: 531/1024 [MB] (16 MBps) [2024-11-29T16:00:42.069Z] Copying: 552/1024 [MB] (20 MBps) [2024-11-29T16:00:43.012Z] Copying: 565/1024 [MB] (12 MBps) [2024-11-29T16:00:43.991Z] Copying: 582/1024 [MB] (16 MBps) [2024-11-29T16:00:44.958Z] Copying: 603/1024 [MB] (21 MBps) [2024-11-29T16:00:45.900Z] Copying: 618/1024 [MB] (14 MBps) [2024-11-29T16:00:46.846Z] Copying: 630/1024 [MB] (12 MBps) [2024-11-29T16:00:47.785Z] Copying: 645/1024 [MB] (15 MBps) [2024-11-29T16:00:48.729Z] Copying: 676/1024 [MB] (30 MBps) [2024-11-29T16:00:49.672Z] Copying: 700/1024 [MB] (24 MBps) [2024-11-29T16:00:51.059Z] Copying: 719/1024 [MB] (18 MBps) [2024-11-29T16:00:51.998Z] Copying: 730/1024 [MB] (10 MBps) [2024-11-29T16:00:52.940Z] Copying: 741/1024 [MB] (11 MBps) [2024-11-29T16:00:53.882Z] Copying: 754/1024 [MB] (12 MBps) [2024-11-29T16:00:54.824Z] Copying: 765/1024 [MB] (11 MBps) [2024-11-29T16:00:55.765Z] Copying: 776/1024 [MB] (10 MBps) [2024-11-29T16:00:56.712Z] Copying: 794/1024 [MB] (17 MBps) [2024-11-29T16:00:58.102Z] Copying: 813/1024 [MB] (19 MBps) [2024-11-29T16:00:58.675Z] Copying: 824/1024 [MB] (10 MBps) [2024-11-29T16:01:00.063Z] Copying: 839/1024 [MB] (14 MBps) [2024-11-29T16:01:01.007Z] Copying: 855/1024 [MB] (16 MBps) [2024-11-29T16:01:01.950Z] Copying: 875/1024 [MB] (19 MBps) [2024-11-29T16:01:02.891Z] Copying: 891/1024 [MB] (16 MBps) [2024-11-29T16:01:03.831Z] Copying: 911/1024 [MB] (20 MBps) [2024-11-29T16:01:04.778Z] Copying: 933/1024 [MB] (21 MBps) [2024-11-29T16:01:05.719Z] Copying: 950/1024 [MB] (16 MBps) [2024-11-29T16:01:07.106Z] Copying: 982936/1048576 [kB] (10096 kBps) [2024-11-29T16:01:07.672Z] Copying: 993024/1048576 [kB] (10088 kBps) 
[2024-11-29T16:01:08.610Z] Copying: 994/1024 [MB] (24 MBps) [2024-11-29T16:01:08.610Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-29 16:01:08.419207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.179 [2024-11-29 16:01:08.419242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:57.179 [2024-11-29 16:01:08.419253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:57.179 [2024-11-29 16:01:08.419260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.179 [2024-11-29 16:01:08.419275] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:57.179 [2024-11-29 16:01:08.421355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.179 [2024-11-29 16:01:08.421379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:57.179 [2024-11-29 16:01:08.421392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.069 ms 00:18:57.179 [2024-11-29 16:01:08.421399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.179 [2024-11-29 16:01:08.423273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.179 [2024-11-29 16:01:08.423297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:57.179 [2024-11-29 16:01:08.423305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.858 ms 00:18:57.179 [2024-11-29 16:01:08.423311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.179 [2024-11-29 16:01:08.437966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.179 [2024-11-29 16:01:08.437997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:57.179 [2024-11-29 16:01:08.438005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.643 ms 00:18:57.179 [2024-11-29 16:01:08.438015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.179 [2024-11-29 16:01:08.442800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.179 [2024-11-29 16:01:08.442822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:57.179 [2024-11-29 16:01:08.442830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.762 ms 00:18:57.179 [2024-11-29 16:01:08.442836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.179 [2024-11-29 16:01:08.460896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.179 [2024-11-29 16:01:08.460920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:57.179 [2024-11-29 16:01:08.460927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.028 ms 00:18:57.179 [2024-11-29 16:01:08.460933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.179 [2024-11-29 16:01:08.472077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.179 [2024-11-29 16:01:08.472184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:57.179 [2024-11-29 16:01:08.472198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.119 ms 00:18:57.179 [2024-11-29 16:01:08.472205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.179 [2024-11-29 16:01:08.472308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.179 [2024-11-29 
16:01:08.472316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:57.179 [2024-11-29 16:01:08.472322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:18:57.179 [2024-11-29 16:01:08.472327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.179 [2024-11-29 16:01:08.490656] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.180 [2024-11-29 16:01:08.490682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:57.180 [2024-11-29 16:01:08.490690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.318 ms 00:18:57.180 [2024-11-29 16:01:08.490696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.180 [2024-11-29 16:01:08.508487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.180 [2024-11-29 16:01:08.508516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:57.180 [2024-11-29 16:01:08.508524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.766 ms 00:18:57.180 [2024-11-29 16:01:08.508536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.180 [2024-11-29 16:01:08.525869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.180 [2024-11-29 16:01:08.525891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:57.180 [2024-11-29 16:01:08.525898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.309 ms 00:18:57.180 [2024-11-29 16:01:08.525904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.180 [2024-11-29 16:01:08.543249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.180 [2024-11-29 16:01:08.543272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:57.180 [2024-11-29 16:01:08.543279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.299 ms 00:18:57.180 [2024-11-29 16:01:08.543285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.180 [2024-11-29 16:01:08.543308] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:57.180 [2024-11-29 16:01:08.543318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 
16:01:08.543375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 
00:18:57.180 [2024-11-29 16:01:08.543526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 
wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:57.180 [2024-11-29 16:01:08.543757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:57.181 [2024-11-29 16:01:08.543763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:57.181 [2024-11-29 16:01:08.543768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:57.181 [2024-11-29 16:01:08.543774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:57.181 [2024-11-29 16:01:08.543779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:57.181 [2024-11-29 16:01:08.543784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:57.181 [2024-11-29 16:01:08.543790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:57.181 [2024-11-29 16:01:08.543795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:57.181 [2024-11-29 16:01:08.543801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:57.181 [2024-11-29 16:01:08.543807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:57.181 [2024-11-29 16:01:08.543812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:57.181 [2024-11-29 16:01:08.543817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:57.181 [2024-11-29 16:01:08.543824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:57.181 [2024-11-29 16:01:08.543829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:57.181 [2024-11-29 16:01:08.543835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:57.181 [2024-11-29 16:01:08.543840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:57.181 [2024-11-29 16:01:08.543845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:57.181 [2024-11-29 16:01:08.543851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:57.181 [2024-11-29 16:01:08.543856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:57.181 [2024-11-29 16:01:08.543861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:57.181 [2024-11-29 16:01:08.543867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:57.181 [2024-11-29 16:01:08.543872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:57.181 [2024-11-29 16:01:08.543878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:57.181 [2024-11-29 16:01:08.543883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:57.181 [2024-11-29 16:01:08.543895] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:57.181 [2024-11-29 16:01:08.543901] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2b5b9a76-0a17-4aa5-bab0-1e3efc4353d5 00:18:57.181 [2024-11-29 16:01:08.543907] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:57.181 [2024-11-29 16:01:08.543912] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:57.181 [2024-11-29 16:01:08.543917] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:57.181 [2024-11-29 16:01:08.543923] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:57.181 [2024-11-29 16:01:08.543928] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:57.181 [2024-11-29 16:01:08.543933] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:57.181 [2024-11-29 16:01:08.543938] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:57.181 [2024-11-29 16:01:08.543943] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:57.181 [2024-11-29 16:01:08.543952] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:57.181 [2024-11-29 16:01:08.543957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.181 [2024-11-29 16:01:08.543962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:57.181 [2024-11-29 16:01:08.543969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.650 ms 00:18:57.181 [2024-11-29 16:01:08.543990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.181 [2024-11-29 16:01:08.553468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.181 [2024-11-29 16:01:08.553562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:57.181 [2024-11-29 16:01:08.553573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.459 ms 00:18:57.181 [2024-11-29 16:01:08.553579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.181 [2024-11-29 16:01:08.553731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.181 [2024-11-29 16:01:08.553738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:57.181 [2024-11-29 16:01:08.553747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:18:57.181 [2024-11-29 16:01:08.553754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.181 [2024-11-29 16:01:08.581202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.181 [2024-11-29 16:01:08.581226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:57.181 [2024-11-29 16:01:08.581234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.181 [2024-11-29 16:01:08.581239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.181 [2024-11-29 16:01:08.581280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.181 [2024-11-29 16:01:08.581286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:57.181 [2024-11-29 16:01:08.581295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.181 [2024-11-29 16:01:08.581300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.181 [2024-11-29 16:01:08.581347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.181 [2024-11-29 16:01:08.581355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:57.181 [2024-11-29 16:01:08.581361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.181 [2024-11-29 16:01:08.581366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.181 [2024-11-29 16:01:08.581377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.181 [2024-11-29 16:01:08.581384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:57.181 [2024-11-29 16:01:08.581389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.181 [2024-11-29 16:01:08.581396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.442 [2024-11-29 16:01:08.638195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.442 [2024-11-29 16:01:08.638225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:57.442 [2024-11-29 16:01:08.638233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:18:57.442 [2024-11-29 16:01:08.638239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.442 [2024-11-29 16:01:08.660736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.442 [2024-11-29 16:01:08.660875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:57.442 [2024-11-29 16:01:08.660887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.442 [2024-11-29 16:01:08.660897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.442 [2024-11-29 16:01:08.660939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.442 [2024-11-29 16:01:08.660946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:57.442 [2024-11-29 16:01:08.660952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.442 [2024-11-29 16:01:08.660958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.442 [2024-11-29 16:01:08.661004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.442 [2024-11-29 16:01:08.661012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:57.442 [2024-11-29 16:01:08.661018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.442 [2024-11-29 16:01:08.661024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.442 [2024-11-29 16:01:08.661095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.442 [2024-11-29 16:01:08.661103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:57.442 [2024-11-29 16:01:08.661110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.442 [2024-11-29 16:01:08.661115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.442 [2024-11-29 16:01:08.661135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.442 [2024-11-29 16:01:08.661142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:57.442 [2024-11-29 16:01:08.661147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.442 [2024-11-29 16:01:08.661154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.442 [2024-11-29 16:01:08.661180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.442 [2024-11-29 16:01:08.661187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:57.442 [2024-11-29 16:01:08.661193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.442 [2024-11-29 16:01:08.661199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.442 [2024-11-29 16:01:08.661229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.442 [2024-11-29 16:01:08.661236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:57.442 [2024-11-29 16:01:08.661243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.442 [2024-11-29 16:01:08.661248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.442 [2024-11-29 16:01:08.661333] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 242.105 ms, result 0 00:18:58.824 00:18:58.824 00:18:58.824 16:01:09 -- ftl/restore.sh@74 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:58.824 [2024-11-29 16:01:09.979560] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:58.824 [2024-11-29 16:01:09.979675] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73609 ] 00:18:58.824 [2024-11-29 16:01:10.125962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.082 [2024-11-29 16:01:10.275511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.082 [2024-11-29 16:01:10.477878] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:59.082 [2024-11-29 16:01:10.477926] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:59.342 [2024-11-29 16:01:10.618069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.342 [2024-11-29 16:01:10.618101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:59.342 [2024-11-29 16:01:10.618111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:59.342 [2024-11-29 16:01:10.618119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.342 [2024-11-29 16:01:10.618151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.342 [2024-11-29 16:01:10.618159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:59.342 [2024-11-29 16:01:10.618165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:59.342 [2024-11-29 16:01:10.618170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.342 [2024-11-29 16:01:10.618182] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:59.342 [2024-11-29 16:01:10.618728] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:59.342 [2024-11-29 16:01:10.618739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.342 [2024-11-29 16:01:10.618745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:59.342 [2024-11-29 16:01:10.618751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:18:59.342 [2024-11-29 16:01:10.618757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.342 [2024-11-29 16:01:10.619767] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:59.342 [2024-11-29 16:01:10.629758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.342 [2024-11-29 16:01:10.629785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:59.343 [2024-11-29 16:01:10.629794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.992 ms 00:18:59.343 [2024-11-29 16:01:10.629800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.343 [2024-11-29 16:01:10.629839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.343 [2024-11-29 16:01:10.629846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:59.343 [2024-11-29 16:01:10.629852] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:59.343 [2024-11-29 16:01:10.629858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.343 [2024-11-29 16:01:10.634127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.343 [2024-11-29 16:01:10.634151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:59.343 [2024-11-29 16:01:10.634158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.226 ms 00:18:59.343 [2024-11-29 16:01:10.634164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.343 [2024-11-29 16:01:10.634229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.343 [2024-11-29 16:01:10.634236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:59.343 [2024-11-29 16:01:10.634243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:59.343 [2024-11-29 16:01:10.634249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.343 [2024-11-29 16:01:10.634281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.343 [2024-11-29 16:01:10.634288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:59.343 [2024-11-29 16:01:10.634294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:59.343 [2024-11-29 16:01:10.634300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.343 [2024-11-29 16:01:10.634319] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:59.343 [2024-11-29 16:01:10.637036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.343 [2024-11-29 16:01:10.637057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:59.343 [2024-11-29 16:01:10.637064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.725 ms 00:18:59.343 [2024-11-29 16:01:10.637070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.343 [2024-11-29 16:01:10.637094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.343 [2024-11-29 16:01:10.637100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:59.343 [2024-11-29 16:01:10.637106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:59.343 [2024-11-29 16:01:10.637113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.343 [2024-11-29 16:01:10.637128] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:59.343 [2024-11-29 16:01:10.637142] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:59.343 [2024-11-29 16:01:10.637167] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:59.343 [2024-11-29 16:01:10.637179] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:59.343 [2024-11-29 16:01:10.637235] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:59.343 [2024-11-29 16:01:10.637243] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:59.343 [2024-11-29 16:01:10.637253] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:59.343 [2024-11-29 16:01:10.637260] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:59.343 [2024-11-29 16:01:10.637266] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:59.343 [2024-11-29 16:01:10.637272] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:59.343 [2024-11-29 16:01:10.637277] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:59.343 [2024-11-29 16:01:10.637284] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:59.343 [2024-11-29 16:01:10.637290] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:59.343 [2024-11-29 16:01:10.637296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.343 [2024-11-29 16:01:10.637301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:59.343 [2024-11-29 16:01:10.637307] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:18:59.343 [2024-11-29 16:01:10.637312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.343 [2024-11-29 16:01:10.637358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.343 [2024-11-29 16:01:10.637365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:59.343 [2024-11-29 16:01:10.637371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:59.343 [2024-11-29 16:01:10.637376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.343 [2024-11-29 16:01:10.637428] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:59.343 [2024-11-29 16:01:10.637436] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:59.343 [2024-11-29 16:01:10.637442] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:59.343 [2024-11-29 16:01:10.637448] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.343 [2024-11-29 16:01:10.637454] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:59.343 [2024-11-29 16:01:10.637459] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:59.343 [2024-11-29 16:01:10.637465] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:59.343 [2024-11-29 16:01:10.637470] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:59.343 [2024-11-29 16:01:10.637475] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:59.343 [2024-11-29 16:01:10.637480] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:59.343 [2024-11-29 16:01:10.637485] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:59.343 [2024-11-29 16:01:10.637491] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:59.343 [2024-11-29 16:01:10.637496] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:59.343 [2024-11-29 16:01:10.637501] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:59.343 [2024-11-29 16:01:10.637506] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:59.343 [2024-11-29 16:01:10.637511] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.343 
[2024-11-29 16:01:10.637521] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:59.343 [2024-11-29 16:01:10.637526] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:59.343 [2024-11-29 16:01:10.637530] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.343 [2024-11-29 16:01:10.637536] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:59.343 [2024-11-29 16:01:10.637542] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:59.343 [2024-11-29 16:01:10.637547] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:59.343 [2024-11-29 16:01:10.637553] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:59.343 [2024-11-29 16:01:10.637557] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:59.343 [2024-11-29 16:01:10.637562] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:59.343 [2024-11-29 16:01:10.637567] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:59.343 [2024-11-29 16:01:10.637572] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:59.343 [2024-11-29 16:01:10.637576] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:59.343 [2024-11-29 16:01:10.637582] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:59.343 [2024-11-29 16:01:10.637587] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:59.343 [2024-11-29 16:01:10.637591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:59.343 [2024-11-29 16:01:10.637596] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:59.343 [2024-11-29 16:01:10.637601] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:59.343 [2024-11-29 16:01:10.637606] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:59.343 [2024-11-29 16:01:10.637610] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:59.343 [2024-11-29 16:01:10.637615] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:59.343 [2024-11-29 16:01:10.637620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:59.343 [2024-11-29 16:01:10.637625] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:59.343 [2024-11-29 16:01:10.637629] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:59.343 [2024-11-29 16:01:10.637634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:59.343 [2024-11-29 16:01:10.637639] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:59.343 [2024-11-29 16:01:10.637645] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:59.343 [2024-11-29 16:01:10.637652] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:59.343 [2024-11-29 16:01:10.637658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.343 [2024-11-29 16:01:10.637664] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:59.343 [2024-11-29 16:01:10.637669] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:59.343 [2024-11-29 16:01:10.637674] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:59.343 [2024-11-29 16:01:10.637679] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region data_btm 00:18:59.343 [2024-11-29 16:01:10.637683] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:59.343 [2024-11-29 16:01:10.637695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:59.343 [2024-11-29 16:01:10.637701] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:59.343 [2024-11-29 16:01:10.637707] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:59.343 [2024-11-29 16:01:10.637714] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:59.344 [2024-11-29 16:01:10.637720] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:59.344 [2024-11-29 16:01:10.637726] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:59.344 [2024-11-29 16:01:10.637731] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:59.344 [2024-11-29 16:01:10.637737] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:59.344 [2024-11-29 16:01:10.637742] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:59.344 [2024-11-29 16:01:10.637748] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:59.344 [2024-11-29 16:01:10.637753] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:59.344 [2024-11-29 16:01:10.637758] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:59.344 [2024-11-29 16:01:10.637764] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:59.344 [2024-11-29 16:01:10.637769] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:59.344 [2024-11-29 16:01:10.637774] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:59.344 [2024-11-29 16:01:10.637780] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:59.344 [2024-11-29 16:01:10.637784] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:59.344 [2024-11-29 16:01:10.637790] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:59.344 [2024-11-29 16:01:10.637796] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:59.344 [2024-11-29 16:01:10.637801] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:59.344 [2024-11-29 
16:01:10.637806] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:59.344 [2024-11-29 16:01:10.637812] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:59.344 [2024-11-29 16:01:10.637817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.344 [2024-11-29 16:01:10.637822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:59.344 [2024-11-29 16:01:10.637828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:18:59.344 [2024-11-29 16:01:10.637834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.344 [2024-11-29 16:01:10.649536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.344 [2024-11-29 16:01:10.649562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:59.344 [2024-11-29 16:01:10.649570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.673 ms 00:18:59.344 [2024-11-29 16:01:10.649579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.344 [2024-11-29 16:01:10.649643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.344 [2024-11-29 16:01:10.649649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:59.344 [2024-11-29 16:01:10.649655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:18:59.344 [2024-11-29 16:01:10.649660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.344 [2024-11-29 16:01:10.685247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.344 [2024-11-29 16:01:10.685279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:59.344 [2024-11-29 16:01:10.685289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.549 ms 00:18:59.344 [2024-11-29 16:01:10.685296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.344 [2024-11-29 16:01:10.685328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.344 [2024-11-29 16:01:10.685337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:59.344 [2024-11-29 16:01:10.685344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:59.344 [2024-11-29 16:01:10.685351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.344 [2024-11-29 16:01:10.685643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.344 [2024-11-29 16:01:10.685656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:59.344 [2024-11-29 16:01:10.685662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:18:59.344 [2024-11-29 16:01:10.685672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.344 [2024-11-29 16:01:10.685766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.344 [2024-11-29 16:01:10.685773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:59.344 [2024-11-29 16:01:10.685779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:18:59.344 [2024-11-29 16:01:10.685785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.344 [2024-11-29 16:01:10.696746] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.344 [2024-11-29 16:01:10.696771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:59.344 [2024-11-29 16:01:10.696779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.945 ms 00:18:59.344 [2024-11-29 16:01:10.696785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.344 [2024-11-29 16:01:10.706853] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:59.344 [2024-11-29 16:01:10.707004] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:59.344 [2024-11-29 16:01:10.707016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.344 [2024-11-29 16:01:10.707023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:59.344 [2024-11-29 16:01:10.707030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.165 ms 00:18:59.344 [2024-11-29 16:01:10.707036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.344 [2024-11-29 16:01:10.725602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.344 [2024-11-29 16:01:10.725627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:59.344 [2024-11-29 16:01:10.725636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.541 ms 00:18:59.344 [2024-11-29 16:01:10.725643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.344 [2024-11-29 16:01:10.734849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.344 [2024-11-29 16:01:10.734871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:59.344 [2024-11-29 16:01:10.734878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.175 ms 00:18:59.344 [2024-11-29 16:01:10.734883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.344 [2024-11-29 16:01:10.744135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.344 [2024-11-29 16:01:10.744162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:59.344 [2024-11-29 16:01:10.744169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.227 ms 00:18:59.344 [2024-11-29 16:01:10.744175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.344 [2024-11-29 16:01:10.744431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.344 [2024-11-29 16:01:10.744440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:59.344 [2024-11-29 16:01:10.744447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:18:59.344 [2024-11-29 16:01:10.744452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.605 [2024-11-29 16:01:10.789150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.605 [2024-11-29 16:01:10.789277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:59.605 [2024-11-29 16:01:10.789291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.686 ms 00:18:59.605 [2024-11-29 16:01:10.789298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.605 [2024-11-29 16:01:10.797260] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum 
resident size is: 9 (of 10) MiB 00:18:59.605 [2024-11-29 16:01:10.798920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.605 [2024-11-29 16:01:10.798943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:59.605 [2024-11-29 16:01:10.798951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.595 ms 00:18:59.605 [2024-11-29 16:01:10.798960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.606 [2024-11-29 16:01:10.799017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.606 [2024-11-29 16:01:10.799025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:59.606 [2024-11-29 16:01:10.799031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:59.606 [2024-11-29 16:01:10.799037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.606 [2024-11-29 16:01:10.799076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.606 [2024-11-29 16:01:10.799083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:59.606 [2024-11-29 16:01:10.799088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:59.606 [2024-11-29 16:01:10.799094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.606 [2024-11-29 16:01:10.800016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.606 [2024-11-29 16:01:10.800035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:59.606 [2024-11-29 16:01:10.800042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.907 ms 00:18:59.606 [2024-11-29 16:01:10.800047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.606 [2024-11-29 16:01:10.800064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.606 [2024-11-29 16:01:10.800071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:59.606 [2024-11-29 16:01:10.800080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:59.606 [2024-11-29 16:01:10.800086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.606 [2024-11-29 16:01:10.800122] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:59.606 [2024-11-29 16:01:10.800129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.606 [2024-11-29 16:01:10.800137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:59.606 [2024-11-29 16:01:10.800143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:59.606 [2024-11-29 16:01:10.800148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.606 [2024-11-29 16:01:10.818471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.606 [2024-11-29 16:01:10.818499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:59.606 [2024-11-29 16:01:10.818508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.310 ms 00:18:59.606 [2024-11-29 16:01:10.818514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.606 [2024-11-29 16:01:10.818568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.606 [2024-11-29 16:01:10.818575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 
00:18:59.606 [2024-11-29 16:01:10.818582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:59.606 [2024-11-29 16:01:10.818587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.606 [2024-11-29 16:01:10.819281] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 200.870 ms, result 0 00:19:00.573  [2024-11-29T16:01:12.983Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-29T16:01:14.369Z] Copying: 25/1024 [MB] (12 MBps) [2024-11-29T16:01:15.311Z] Copying: 43/1024 [MB] (18 MBps) [2024-11-29T16:01:16.255Z] Copying: 53/1024 [MB] (10 MBps) [2024-11-29T16:01:17.199Z] Copying: 71/1024 [MB] (17 MBps) [2024-11-29T16:01:18.141Z] Copying: 91/1024 [MB] (20 MBps) [2024-11-29T16:01:19.087Z] Copying: 113/1024 [MB] (21 MBps) [2024-11-29T16:01:20.031Z] Copying: 132/1024 [MB] (18 MBps) [2024-11-29T16:01:20.973Z] Copying: 156/1024 [MB] (23 MBps) [2024-11-29T16:01:22.359Z] Copying: 178/1024 [MB] (22 MBps) [2024-11-29T16:01:23.301Z] Copying: 191/1024 [MB] (12 MBps) [2024-11-29T16:01:24.244Z] Copying: 205/1024 [MB] (14 MBps) [2024-11-29T16:01:25.186Z] Copying: 223/1024 [MB] (17 MBps) [2024-11-29T16:01:26.125Z] Copying: 233/1024 [MB] (10 MBps) [2024-11-29T16:01:27.067Z] Copying: 248/1024 [MB] (15 MBps) [2024-11-29T16:01:28.009Z] Copying: 264/1024 [MB] (16 MBps) [2024-11-29T16:01:29.392Z] Copying: 282/1024 [MB] (17 MBps) [2024-11-29T16:01:29.972Z] Copying: 303/1024 [MB] (21 MBps) [2024-11-29T16:01:31.358Z] Copying: 327/1024 [MB] (23 MBps) [2024-11-29T16:01:32.300Z] Copying: 349/1024 [MB] (22 MBps) [2024-11-29T16:01:33.240Z] Copying: 371/1024 [MB] (21 MBps) [2024-11-29T16:01:34.180Z] Copying: 392/1024 [MB] (21 MBps) [2024-11-29T16:01:35.119Z] Copying: 407/1024 [MB] (15 MBps) [2024-11-29T16:01:36.060Z] Copying: 425/1024 [MB] (18 MBps) [2024-11-29T16:01:37.002Z] Copying: 441/1024 [MB] (15 MBps) [2024-11-29T16:01:38.386Z] Copying: 462/1024 [MB] (21 MBps) [2024-11-29T16:01:38.958Z] Copying: 475/1024 [MB] (13 MBps) [2024-11-29T16:01:40.343Z] Copying: 486/1024 [MB] (10 MBps) [2024-11-29T16:01:41.293Z] Copying: 496/1024 [MB] (10 MBps) [2024-11-29T16:01:42.257Z] Copying: 508/1024 [MB] (11 MBps) [2024-11-29T16:01:43.197Z] Copying: 519/1024 [MB] (10 MBps) [2024-11-29T16:01:44.139Z] Copying: 529/1024 [MB] (10 MBps) [2024-11-29T16:01:45.082Z] Copying: 539/1024 [MB] (10 MBps) [2024-11-29T16:01:46.027Z] Copying: 550/1024 [MB] (10 MBps) [2024-11-29T16:01:46.968Z] Copying: 581/1024 [MB] (31 MBps) [2024-11-29T16:01:48.353Z] Copying: 592/1024 [MB] (11 MBps) [2024-11-29T16:01:49.294Z] Copying: 603/1024 [MB] (10 MBps) [2024-11-29T16:01:50.237Z] Copying: 617/1024 [MB] (14 MBps) [2024-11-29T16:01:51.180Z] Copying: 636/1024 [MB] (18 MBps) [2024-11-29T16:01:52.121Z] Copying: 658/1024 [MB] (22 MBps) [2024-11-29T16:01:53.063Z] Copying: 674/1024 [MB] (15 MBps) [2024-11-29T16:01:54.006Z] Copying: 687/1024 [MB] (12 MBps) [2024-11-29T16:01:55.390Z] Copying: 700/1024 [MB] (12 MBps) [2024-11-29T16:01:55.961Z] Copying: 711/1024 [MB] (11 MBps) [2024-11-29T16:01:57.349Z] Copying: 722/1024 [MB] (11 MBps) [2024-11-29T16:01:58.293Z] Copying: 734/1024 [MB] (12 MBps) [2024-11-29T16:01:59.235Z] Copying: 745/1024 [MB] (10 MBps) [2024-11-29T16:02:00.177Z] Copying: 755/1024 [MB] (10 MBps) [2024-11-29T16:02:01.123Z] Copying: 771/1024 [MB] (15 MBps) [2024-11-29T16:02:02.064Z] Copying: 782/1024 [MB] (11 MBps) [2024-11-29T16:02:03.005Z] Copying: 793/1024 [MB] (10 MBps) [2024-11-29T16:02:04.390Z] Copying: 812/1024 [MB] (18 MBps) 
[2024-11-29T16:02:04.967Z] Copying: 823/1024 [MB] (11 MBps) [2024-11-29T16:02:06.347Z] Copying: 834/1024 [MB] (10 MBps) [2024-11-29T16:02:07.290Z] Copying: 847/1024 [MB] (13 MBps) [2024-11-29T16:02:08.230Z] Copying: 859/1024 [MB] (11 MBps) [2024-11-29T16:02:09.169Z] Copying: 870/1024 [MB] (11 MBps) [2024-11-29T16:02:10.160Z] Copying: 886/1024 [MB] (15 MBps) [2024-11-29T16:02:11.125Z] Copying: 897/1024 [MB] (10 MBps) [2024-11-29T16:02:12.065Z] Copying: 908/1024 [MB] (11 MBps) [2024-11-29T16:02:13.007Z] Copying: 920/1024 [MB] (11 MBps) [2024-11-29T16:02:14.395Z] Copying: 930/1024 [MB] (10 MBps) [2024-11-29T16:02:14.966Z] Copying: 941/1024 [MB] (11 MBps) [2024-11-29T16:02:16.350Z] Copying: 952/1024 [MB] (10 MBps) [2024-11-29T16:02:17.297Z] Copying: 963/1024 [MB] (10 MBps) [2024-11-29T16:02:18.243Z] Copying: 974/1024 [MB] (10 MBps) [2024-11-29T16:02:19.188Z] Copying: 986/1024 [MB] (12 MBps) [2024-11-29T16:02:20.132Z] Copying: 1002/1024 [MB] (16 MBps) [2024-11-29T16:02:21.076Z] Copying: 1014/1024 [MB] (12 MBps) [2024-11-29T16:02:21.340Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-29 16:02:21.089477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.909 [2024-11-29 16:02:21.089576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:09.909 [2024-11-29 16:02:21.089603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:09.909 [2024-11-29 16:02:21.089620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.909 [2024-11-29 16:02:21.089663] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:09.909 [2024-11-29 16:02:21.094944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.909 [2024-11-29 16:02:21.095033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:09.909 [2024-11-29 16:02:21.095052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.253 ms 00:20:09.909 [2024-11-29 16:02:21.095067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.909 [2024-11-29 16:02:21.095520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.909 [2024-11-29 16:02:21.095549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:09.909 [2024-11-29 16:02:21.095566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:20:09.909 [2024-11-29 16:02:21.095580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.909 [2024-11-29 16:02:21.104432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.909 [2024-11-29 16:02:21.104478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:09.909 [2024-11-29 16:02:21.104498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.822 ms 00:20:09.909 [2024-11-29 16:02:21.104507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.909 [2024-11-29 16:02:21.110629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.909 [2024-11-29 16:02:21.110671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:09.909 [2024-11-29 16:02:21.110683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.093 ms 00:20:09.909 [2024-11-29 16:02:21.110692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.909 [2024-11-29 16:02:21.138704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:09.910 [2024-11-29 16:02:21.138773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:09.910 [2024-11-29 16:02:21.138788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.924 ms 00:20:09.910 [2024-11-29 16:02:21.138796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.910 [2024-11-29 16:02:21.155534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.910 [2024-11-29 16:02:21.155584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:09.910 [2024-11-29 16:02:21.155598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.684 ms 00:20:09.910 [2024-11-29 16:02:21.155613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.910 [2024-11-29 16:02:21.155784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.910 [2024-11-29 16:02:21.155796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:09.910 [2024-11-29 16:02:21.155806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:20:09.910 [2024-11-29 16:02:21.155814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.910 [2024-11-29 16:02:21.182883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.910 [2024-11-29 16:02:21.182931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:09.910 [2024-11-29 16:02:21.182944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.052 ms 00:20:09.910 [2024-11-29 16:02:21.182951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.910 [2024-11-29 16:02:21.209116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.910 [2024-11-29 16:02:21.209330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:09.910 [2024-11-29 16:02:21.209368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.100 ms 00:20:09.910 [2024-11-29 16:02:21.209376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.910 [2024-11-29 16:02:21.235120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.910 [2024-11-29 16:02:21.235169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:09.910 [2024-11-29 16:02:21.235180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.702 ms 00:20:09.910 [2024-11-29 16:02:21.235188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.910 [2024-11-29 16:02:21.261189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.910 [2024-11-29 16:02:21.261238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:09.910 [2024-11-29 16:02:21.261250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.892 ms 00:20:09.910 [2024-11-29 16:02:21.261258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.910 [2024-11-29 16:02:21.261307] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:09.910 [2024-11-29 16:02:21.261333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261352] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261545] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 
[2024-11-29 16:02:21.261754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:09.910 [2024-11-29 16:02:21.261878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.261886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.261894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.261902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.261909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.261917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.261925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.261932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.261940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.261948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:20:09.911 [2024-11-29 16:02:21.261955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.261962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.261991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.262000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.262008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.262016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.262025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.262034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.262042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.262050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.262058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.262067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.262075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.262083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.262090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.262098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.262109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.262118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.262126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.262134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.262142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.262150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.262159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:09.911 [2024-11-29 16:02:21.262176] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:09.911 [2024-11-29 16:02:21.262184] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2b5b9a76-0a17-4aa5-bab0-1e3efc4353d5 
00:20:09.911 [2024-11-29 16:02:21.262192] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:20:09.911 [2024-11-29 16:02:21.262200] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:20:09.911 [2024-11-29 16:02:21.262207] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:20:09.911 [2024-11-29 16:02:21.262216] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:20:09.911 [2024-11-29 16:02:21.262223] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:09.911 [2024-11-29 16:02:21.262231] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:20:09.911 [2024-11-29 16:02:21.262240] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:20:09.911 [2024-11-29 16:02:21.262255] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:20:09.911 [2024-11-29 16:02:21.262262] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:20:09.911 [2024-11-29 16:02:21.262269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:09.911 [2024-11-29 16:02:21.262277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:20:09.911 [2024-11-29 16:02:21.262288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms
00:20:09.911 [2024-11-29 16:02:21.262297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:09.911 [2024-11-29 16:02:21.275788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:09.911 [2024-11-29 16:02:21.275831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:20:09.911 [2024-11-29 16:02:21.275843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.440 ms
00:20:09.911 [2024-11-29 16:02:21.275851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:09.911 [2024-11-29 16:02:21.276089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:09.911 [2024-11-29 16:02:21.276107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:20:09.911 [2024-11-29 16:02:21.276116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms
00:20:09.911 [2024-11-29 16:02:21.276124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:09.911 [2024-11-29 16:02:21.315365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:09.911 [2024-11-29 16:02:21.315567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:20:09.911 [2024-11-29 16:02:21.315587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:09.911 [2024-11-29 16:02:21.315595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:09.911 [2024-11-29 16:02:21.315669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:09.911 [2024-11-29 16:02:21.315686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:20:09.911 [2024-11-29 16:02:21.315694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:09.911 [2024-11-29 16:02:21.315702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:09.911 [2024-11-29 16:02:21.315780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:09.911 [2024-11-29 16:02:21.315791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:20:09.911 [2024-11-29 16:02:21.315800]
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.911 [2024-11-29 16:02:21.315807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.911 [2024-11-29 16:02:21.315823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.911 [2024-11-29 16:02:21.315831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:09.911 [2024-11-29 16:02:21.315843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.911 [2024-11-29 16:02:21.315850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.173 [2024-11-29 16:02:21.396716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.173 [2024-11-29 16:02:21.396772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:10.173 [2024-11-29 16:02:21.396784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.173 [2024-11-29 16:02:21.396792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.173 [2024-11-29 16:02:21.429469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.173 [2024-11-29 16:02:21.429518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:10.173 [2024-11-29 16:02:21.429536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.173 [2024-11-29 16:02:21.429545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.173 [2024-11-29 16:02:21.429611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.173 [2024-11-29 16:02:21.429621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:10.173 [2024-11-29 16:02:21.429630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.173 [2024-11-29 16:02:21.429638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.173 [2024-11-29 16:02:21.429680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.173 [2024-11-29 16:02:21.429702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:10.173 [2024-11-29 16:02:21.429711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.173 [2024-11-29 16:02:21.429723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.173 [2024-11-29 16:02:21.429825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.173 [2024-11-29 16:02:21.429836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:10.173 [2024-11-29 16:02:21.429844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.173 [2024-11-29 16:02:21.429852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.173 [2024-11-29 16:02:21.429888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.173 [2024-11-29 16:02:21.429897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:10.173 [2024-11-29 16:02:21.429905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.173 [2024-11-29 16:02:21.429913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.173 [2024-11-29 16:02:21.429960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.173 [2024-11-29 16:02:21.429968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:20:10.173 [2024-11-29 16:02:21.430002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.173 [2024-11-29 16:02:21.430010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.173 [2024-11-29 16:02:21.430058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.173 [2024-11-29 16:02:21.430087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:10.173 [2024-11-29 16:02:21.430096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.173 [2024-11-29 16:02:21.430107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.173 [2024-11-29 16:02:21.430237] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 340.731 ms, result 0 00:20:11.119 00:20:11.119 00:20:11.119 16:02:22 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:13.672 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:13.672 16:02:24 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:13.672 [2024-11-29 16:02:24.654678] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:13.672 [2024-11-29 16:02:24.654820] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74392 ] 00:20:13.672 [2024-11-29 16:02:24.807234] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.672 [2024-11-29 16:02:25.026649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.930 [2024-11-29 16:02:25.316035] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:13.930 [2024-11-29 16:02:25.316315] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:14.191 [2024-11-29 16:02:25.471453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.191 [2024-11-29 16:02:25.471517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:14.191 [2024-11-29 16:02:25.471533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:14.191 [2024-11-29 16:02:25.471545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.191 [2024-11-29 16:02:25.471601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.191 [2024-11-29 16:02:25.471611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:14.191 [2024-11-29 16:02:25.471620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:14.191 [2024-11-29 16:02:25.471628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.191 [2024-11-29 16:02:25.471649] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:14.191 [2024-11-29 16:02:25.472456] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:14.191 [2024-11-29 16:02:25.472479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.191 [2024-11-29 16:02:25.472488] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:14.191 [2024-11-29 16:02:25.472496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.836 ms 00:20:14.191 [2024-11-29 16:02:25.472505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.191 [2024-11-29 16:02:25.474274] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:14.191 [2024-11-29 16:02:25.488636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.191 [2024-11-29 16:02:25.488691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:14.191 [2024-11-29 16:02:25.488706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.365 ms 00:20:14.191 [2024-11-29 16:02:25.488714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.191 [2024-11-29 16:02:25.488797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.192 [2024-11-29 16:02:25.488807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:14.192 [2024-11-29 16:02:25.488816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:14.192 [2024-11-29 16:02:25.488824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.192 [2024-11-29 16:02:25.497409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.192 [2024-11-29 16:02:25.497456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:14.192 [2024-11-29 16:02:25.497466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.502 ms 00:20:14.192 [2024-11-29 16:02:25.497474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.192 [2024-11-29 16:02:25.497574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.192 [2024-11-29 16:02:25.497584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:14.192 [2024-11-29 16:02:25.497593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:20:14.192 [2024-11-29 16:02:25.497602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.192 [2024-11-29 16:02:25.497649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.192 [2024-11-29 16:02:25.497659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:14.192 [2024-11-29 16:02:25.497667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:14.192 [2024-11-29 16:02:25.497675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.192 [2024-11-29 16:02:25.497722] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:14.192 [2024-11-29 16:02:25.502052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.192 [2024-11-29 16:02:25.502093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:14.192 [2024-11-29 16:02:25.502104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.344 ms 00:20:14.192 [2024-11-29 16:02:25.502111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.192 [2024-11-29 16:02:25.502152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.192 [2024-11-29 16:02:25.502160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:14.192 [2024-11-29 16:02:25.502168] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:14.192 [2024-11-29 16:02:25.502179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.192 [2024-11-29 16:02:25.502231] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:14.192 [2024-11-29 16:02:25.502254] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:14.192 [2024-11-29 16:02:25.502290] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:14.192 [2024-11-29 16:02:25.502305] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:14.192 [2024-11-29 16:02:25.502383] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:14.192 [2024-11-29 16:02:25.502393] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:14.192 [2024-11-29 16:02:25.502406] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:14.192 [2024-11-29 16:02:25.502417] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:14.192 [2024-11-29 16:02:25.502426] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:14.192 [2024-11-29 16:02:25.502434] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:14.192 [2024-11-29 16:02:25.502442] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:14.192 [2024-11-29 16:02:25.502450] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:14.192 [2024-11-29 16:02:25.502457] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:14.192 [2024-11-29 16:02:25.502465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.192 [2024-11-29 16:02:25.502473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:14.192 [2024-11-29 16:02:25.502481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:20:14.192 [2024-11-29 16:02:25.502488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.192 [2024-11-29 16:02:25.502550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.192 [2024-11-29 16:02:25.502559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:14.192 [2024-11-29 16:02:25.502566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:14.192 [2024-11-29 16:02:25.502573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.192 [2024-11-29 16:02:25.502643] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:14.192 [2024-11-29 16:02:25.502653] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:14.192 [2024-11-29 16:02:25.502661] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:14.192 [2024-11-29 16:02:25.502670] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.192 [2024-11-29 16:02:25.502677] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:14.192 [2024-11-29 16:02:25.502684] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:14.192 
[2024-11-29 16:02:25.502690] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:14.192 [2024-11-29 16:02:25.502699] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:14.192 [2024-11-29 16:02:25.502706] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:14.192 [2024-11-29 16:02:25.502713] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:14.192 [2024-11-29 16:02:25.502720] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:14.192 [2024-11-29 16:02:25.502728] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:14.192 [2024-11-29 16:02:25.502737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:14.192 [2024-11-29 16:02:25.502745] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:14.192 [2024-11-29 16:02:25.502751] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:14.192 [2024-11-29 16:02:25.502759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.192 [2024-11-29 16:02:25.502773] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:14.192 [2024-11-29 16:02:25.502779] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:14.192 [2024-11-29 16:02:25.502786] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.192 [2024-11-29 16:02:25.502793] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:14.192 [2024-11-29 16:02:25.502800] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:14.192 [2024-11-29 16:02:25.502807] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:14.192 [2024-11-29 16:02:25.502813] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:14.192 [2024-11-29 16:02:25.502821] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:14.192 [2024-11-29 16:02:25.502827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:14.192 [2024-11-29 16:02:25.502833] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:14.192 [2024-11-29 16:02:25.502840] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:14.192 [2024-11-29 16:02:25.502846] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:14.192 [2024-11-29 16:02:25.502853] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:14.192 [2024-11-29 16:02:25.502859] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:14.192 [2024-11-29 16:02:25.502865] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:14.192 [2024-11-29 16:02:25.502872] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:14.192 [2024-11-29 16:02:25.502878] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:14.192 [2024-11-29 16:02:25.502885] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:14.192 [2024-11-29 16:02:25.502891] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:14.192 [2024-11-29 16:02:25.502897] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:14.192 [2024-11-29 16:02:25.502904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:14.192 [2024-11-29 16:02:25.502910] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] 
Region trim_md_mirror 00:20:14.192 [2024-11-29 16:02:25.502917] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:14.192 [2024-11-29 16:02:25.502923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:14.192 [2024-11-29 16:02:25.502929] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:14.192 [2024-11-29 16:02:25.502939] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:14.192 [2024-11-29 16:02:25.502947] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:14.192 [2024-11-29 16:02:25.502955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.192 [2024-11-29 16:02:25.502964] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:14.192 [2024-11-29 16:02:25.503005] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:14.192 [2024-11-29 16:02:25.503013] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:14.192 [2024-11-29 16:02:25.503020] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:14.192 [2024-11-29 16:02:25.503027] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:14.192 [2024-11-29 16:02:25.503034] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:14.192 [2024-11-29 16:02:25.503043] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:14.192 [2024-11-29 16:02:25.503053] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:14.192 [2024-11-29 16:02:25.503063] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:14.192 [2024-11-29 16:02:25.503070] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:14.192 [2024-11-29 16:02:25.503078] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:14.192 [2024-11-29 16:02:25.503086] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:14.193 [2024-11-29 16:02:25.503094] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:14.193 [2024-11-29 16:02:25.503101] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:14.193 [2024-11-29 16:02:25.503108] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:14.193 [2024-11-29 16:02:25.503116] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:14.193 [2024-11-29 16:02:25.503124] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:14.193 [2024-11-29 16:02:25.503131] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:14.193 [2024-11-29 16:02:25.503138] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:14.193 [2024-11-29 16:02:25.503145] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:14.193 [2024-11-29 16:02:25.503153] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:14.193 [2024-11-29 16:02:25.503160] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:14.193 [2024-11-29 16:02:25.503169] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:14.193 [2024-11-29 16:02:25.503179] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:14.193 [2024-11-29 16:02:25.503186] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:14.193 [2024-11-29 16:02:25.503193] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:14.193 [2024-11-29 16:02:25.503200] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:14.193 [2024-11-29 16:02:25.503208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.193 [2024-11-29 16:02:25.503217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:14.193 [2024-11-29 16:02:25.503224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:20:14.193 [2024-11-29 16:02:25.503231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.193 [2024-11-29 16:02:25.522362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.193 [2024-11-29 16:02:25.522415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:14.193 [2024-11-29 16:02:25.522428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.088 ms 00:20:14.193 [2024-11-29 16:02:25.522442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.193 [2024-11-29 16:02:25.522539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.193 [2024-11-29 16:02:25.522547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:14.193 [2024-11-29 16:02:25.522556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:14.193 [2024-11-29 16:02:25.522564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.193 [2024-11-29 16:02:25.570850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.193 [2024-11-29 16:02:25.570914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:14.193 [2024-11-29 16:02:25.570927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.230 ms 00:20:14.193 [2024-11-29 16:02:25.570936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.193 [2024-11-29 16:02:25.571007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.193 [2024-11-29 16:02:25.571018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:14.193 [2024-11-29 16:02:25.571027] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:20:14.193 [2024-11-29 16:02:25.571035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:14.193 [2024-11-29 16:02:25.571610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:14.193 [2024-11-29 16:02:25.571656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:20:14.193 [2024-11-29 16:02:25.571667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms
00:20:14.193 [2024-11-29 16:02:25.571681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:14.193 [2024-11-29 16:02:25.571814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:14.193 [2024-11-29 16:02:25.571823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:20:14.193 [2024-11-29 16:02:25.571832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms
00:20:14.193 [2024-11-29 16:02:25.571840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:14.193 [2024-11-29 16:02:25.588681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:14.193 [2024-11-29 16:02:25.588728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:20:14.193 [2024-11-29 16:02:25.588739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.815 ms
00:20:14.193 [2024-11-29 16:02:25.588748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:14.193 [2024-11-29 16:02:25.603425] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:20:14.193 [2024-11-29 16:02:25.603475] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:20:14.193 [2024-11-29 16:02:25.603488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:14.193 [2024-11-29 16:02:25.603496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:20:14.193 [2024-11-29 16:02:25.603506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.629 ms
00:20:14.193 [2024-11-29 16:02:25.603513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:14.455 [2024-11-29 16:02:25.629698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:14.455 [2024-11-29 16:02:25.629754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:20:14.455 [2024-11-29 16:02:25.629767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.118 ms
00:20:14.455 [2024-11-29 16:02:25.629775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:14.455 [2024-11-29 16:02:25.642850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:14.455 [2024-11-29 16:02:25.642901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:20:14.455 [2024-11-29 16:02:25.642912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.007 ms
00:20:14.455 [2024-11-29 16:02:25.642920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:14.455 [2024-11-29 16:02:25.656164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:14.455 [2024-11-29 16:02:25.656220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:20:14.455 [2024-11-29 16:02:25.656233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.179 ms
00:20:14.455 [2024-11-29 16:02:25.656240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:14.455 [2024-11-29 16:02:25.656639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:14.455 [2024-11-29 16:02:25.656651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:20:14.455 [2024-11-29 16:02:25.656661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms
00:20:14.455 [2024-11-29 16:02:25.656668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:14.455 [2024-11-29 16:02:25.723446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:14.455 [2024-11-29 16:02:25.723509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:20:14.455 [2024-11-29 16:02:25.723524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.758 ms
00:20:14.455 [2024-11-29 16:02:25.723534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:14.455 [2024-11-29 16:02:25.735159] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:20:14.455 [2024-11-29 16:02:25.738361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:14.455 [2024-11-29 16:02:25.738410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:20:14.455 [2024-11-29 16:02:25.738422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.764 ms
00:20:14.455 [2024-11-29 16:02:25.738438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:14.455 [2024-11-29 16:02:25.738517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:14.455 [2024-11-29 16:02:25.738528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:20:14.455 [2024-11-29 16:02:25.738537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:20:14.455 [2024-11-29 16:02:25.738545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:14.455 [2024-11-29 16:02:25.738613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:14.455 [2024-11-29 16:02:25.738624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:20:14.455 [2024-11-29 16:02:25.738632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms
00:20:14.455 [2024-11-29 16:02:25.738641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:14.455 [2024-11-29 16:02:25.740052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:14.455 [2024-11-29 16:02:25.740093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs
00:20:14.455 [2024-11-29 16:02:25.740104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.390 ms
00:20:14.455 [2024-11-29 16:02:25.740112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:14.455 [2024-11-29 16:02:25.740150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:14.455 [2024-11-29 16:02:25.740158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:20:14.455 [2024-11-29 16:02:25.740173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:20:14.455 [2024-11-29 16:02:25.740181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:14.455 [2024-11-29 16:02:25.740218] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:20:14.455
[2024-11-29 16:02:25.740228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.455 [2024-11-29 16:02:25.740239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:14.455 [2024-11-29 16:02:25.740247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:14.455 [2024-11-29 16:02:25.740255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.455 [2024-11-29 16:02:25.766749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.455 [2024-11-29 16:02:25.766801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:14.455 [2024-11-29 16:02:25.766815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.475 ms 00:20:14.455 [2024-11-29 16:02:25.766823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.455 [2024-11-29 16:02:25.766919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.455 [2024-11-29 16:02:25.766929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:14.455 [2024-11-29 16:02:25.766938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:14.455 [2024-11-29 16:02:25.766946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.455 [2024-11-29 16:02:25.768237] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 296.262 ms, result 0 00:20:15.401  [2024-11-29T16:02:28.218Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-29T16:02:28.801Z] Copying: 29/1024 [MB] (14 MBps) [2024-11-29T16:02:30.183Z] Copying: 59/1024 [MB] (29 MBps) [2024-11-29T16:02:31.125Z] Copying: 80/1024 [MB] (21 MBps) [2024-11-29T16:02:32.070Z] Copying: 102/1024 [MB] (21 MBps) [2024-11-29T16:02:33.014Z] Copying: 123/1024 [MB] (21 MBps) [2024-11-29T16:02:33.959Z] Copying: 137/1024 [MB] (14 MBps) [2024-11-29T16:02:34.907Z] Copying: 149/1024 [MB] (11 MBps) [2024-11-29T16:02:35.852Z] Copying: 161/1024 [MB] (11 MBps) [2024-11-29T16:02:36.792Z] Copying: 171/1024 [MB] (10 MBps) [2024-11-29T16:02:38.177Z] Copying: 198/1024 [MB] (26 MBps) [2024-11-29T16:02:39.147Z] Copying: 214/1024 [MB] (16 MBps) [2024-11-29T16:02:40.092Z] Copying: 233/1024 [MB] (19 MBps) [2024-11-29T16:02:41.036Z] Copying: 256/1024 [MB] (22 MBps) [2024-11-29T16:02:41.981Z] Copying: 274/1024 [MB] (17 MBps) [2024-11-29T16:02:42.925Z] Copying: 287/1024 [MB] (13 MBps) [2024-11-29T16:02:43.868Z] Copying: 302/1024 [MB] (15 MBps) [2024-11-29T16:02:44.811Z] Copying: 316/1024 [MB] (13 MBps) [2024-11-29T16:02:46.198Z] Copying: 327/1024 [MB] (10 MBps) [2024-11-29T16:02:47.142Z] Copying: 343/1024 [MB] (15 MBps) [2024-11-29T16:02:48.086Z] Copying: 359/1024 [MB] (16 MBps) [2024-11-29T16:02:49.033Z] Copying: 377/1024 [MB] (17 MBps) [2024-11-29T16:02:49.979Z] Copying: 390/1024 [MB] (12 MBps) [2024-11-29T16:02:50.923Z] Copying: 405/1024 [MB] (14 MBps) [2024-11-29T16:02:51.867Z] Copying: 418/1024 [MB] (13 MBps) [2024-11-29T16:02:52.813Z] Copying: 436/1024 [MB] (17 MBps) [2024-11-29T16:02:54.202Z] Copying: 453/1024 [MB] (17 MBps) [2024-11-29T16:02:55.147Z] Copying: 467/1024 [MB] (14 MBps) [2024-11-29T16:02:56.092Z] Copying: 478/1024 [MB] (11 MBps) [2024-11-29T16:02:57.037Z] Copying: 491/1024 [MB] (12 MBps) [2024-11-29T16:02:57.983Z] Copying: 501/1024 [MB] (10 MBps) [2024-11-29T16:02:58.929Z] Copying: 512/1024 [MB] (10 MBps) [2024-11-29T16:02:59.874Z] Copying: 534792/1048576 [kB] (10200 kBps) [2024-11-29T16:03:00.819Z] 
Copying: 532/1024 [MB] (10 MBps) [2024-11-29T16:03:02.199Z] Copying: 543/1024 [MB] (10 MBps) [2024-11-29T16:03:03.144Z] Copying: 569/1024 [MB] (26 MBps) [2024-11-29T16:03:04.089Z] Copying: 581/1024 [MB] (12 MBps) [2024-11-29T16:03:05.033Z] Copying: 594/1024 [MB] (12 MBps) [2024-11-29T16:03:05.974Z] Copying: 612/1024 [MB] (17 MBps) [2024-11-29T16:03:06.920Z] Copying: 629/1024 [MB] (17 MBps) [2024-11-29T16:03:07.870Z] Copying: 646/1024 [MB] (16 MBps) [2024-11-29T16:03:08.877Z] Copying: 659/1024 [MB] (12 MBps) [2024-11-29T16:03:09.823Z] Copying: 674/1024 [MB] (15 MBps) [2024-11-29T16:03:11.210Z] Copying: 694/1024 [MB] (20 MBps) [2024-11-29T16:03:11.783Z] Copying: 706/1024 [MB] (11 MBps) [2024-11-29T16:03:13.170Z] Copying: 717/1024 [MB] (10 MBps) [2024-11-29T16:03:14.114Z] Copying: 727/1024 [MB] (10 MBps) [2024-11-29T16:03:15.063Z] Copying: 741/1024 [MB] (13 MBps) [2024-11-29T16:03:15.997Z] Copying: 759/1024 [MB] (18 MBps) [2024-11-29T16:03:16.940Z] Copying: 811/1024 [MB] (51 MBps) [2024-11-29T16:03:17.880Z] Copying: 833/1024 [MB] (21 MBps) [2024-11-29T16:03:18.822Z] Copying: 851/1024 [MB] (17 MBps) [2024-11-29T16:03:20.207Z] Copying: 869/1024 [MB] (18 MBps) [2024-11-29T16:03:21.154Z] Copying: 900480/1048576 [kB] (10176 kBps) [2024-11-29T16:03:22.096Z] Copying: 910592/1048576 [kB] (10112 kBps) [2024-11-29T16:03:23.039Z] Copying: 900/1024 [MB] (11 MBps) [2024-11-29T16:03:23.983Z] Copying: 911/1024 [MB] (10 MBps) [2024-11-29T16:03:24.928Z] Copying: 943816/1048576 [kB] (10184 kBps) [2024-11-29T16:03:25.870Z] Copying: 953940/1048576 [kB] (10124 kBps) [2024-11-29T16:03:26.806Z] Copying: 943/1024 [MB] (12 MBps) [2024-11-29T16:03:28.187Z] Copying: 972/1024 [MB] (28 MBps) [2024-11-29T16:03:29.134Z] Copying: 996/1024 [MB] (23 MBps) [2024-11-29T16:03:30.080Z] Copying: 1014/1024 [MB] (18 MBps) [2024-11-29T16:03:30.080Z] Copying: 1048304/1048576 [kB] (9188 kBps) [2024-11-29T16:03:30.080Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-29 16:03:30.012991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.649 [2024-11-29 16:03:30.013080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:18.649 [2024-11-29 16:03:30.013098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:18.649 [2024-11-29 16:03:30.013107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.649 [2024-11-29 16:03:30.014692] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:18.649 [2024-11-29 16:03:30.018539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.649 [2024-11-29 16:03:30.018591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:18.649 [2024-11-29 16:03:30.018605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.793 ms 00:21:18.649 [2024-11-29 16:03:30.018613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.649 [2024-11-29 16:03:30.030890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.649 [2024-11-29 16:03:30.030942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:18.649 [2024-11-29 16:03:30.030964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.001 ms 00:21:18.649 [2024-11-29 16:03:30.030993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.649 [2024-11-29 16:03:30.054811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.649 
[2024-11-29 16:03:30.054862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:18.649 [2024-11-29 16:03:30.054874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.797 ms 00:21:18.649 [2024-11-29 16:03:30.054882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.649 [2024-11-29 16:03:30.061019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.649 [2024-11-29 16:03:30.061231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:18.649 [2024-11-29 16:03:30.061255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.108 ms 00:21:18.649 [2024-11-29 16:03:30.061276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.910 [2024-11-29 16:03:30.088718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.911 [2024-11-29 16:03:30.088918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:18.911 [2024-11-29 16:03:30.088941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.378 ms 00:21:18.911 [2024-11-29 16:03:30.088949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.911 [2024-11-29 16:03:30.106268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.911 [2024-11-29 16:03:30.106318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:18.911 [2024-11-29 16:03:30.106330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.220 ms 00:21:18.911 [2024-11-29 16:03:30.106339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.173 [2024-11-29 16:03:30.343070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.173 [2024-11-29 16:03:30.343265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:19.173 [2024-11-29 16:03:30.343288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 236.675 ms 00:21:19.173 [2024-11-29 16:03:30.343298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.173 [2024-11-29 16:03:30.370348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.173 [2024-11-29 16:03:30.370535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:19.173 [2024-11-29 16:03:30.370556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.019 ms 00:21:19.173 [2024-11-29 16:03:30.370564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.173 [2024-11-29 16:03:30.397427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.173 [2024-11-29 16:03:30.397636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:19.174 [2024-11-29 16:03:30.397672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.773 ms 00:21:19.174 [2024-11-29 16:03:30.397679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.174 [2024-11-29 16:03:30.423679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.174 [2024-11-29 16:03:30.423732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:19.174 [2024-11-29 16:03:30.423744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.851 ms 00:21:19.174 [2024-11-29 16:03:30.423751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.174 [2024-11-29 16:03:30.449851] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.174 [2024-11-29 16:03:30.449902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:19.174 [2024-11-29 16:03:30.449914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.991 ms 00:21:19.174 [2024-11-29 16:03:30.449921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.174 [2024-11-29 16:03:30.449990] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:19.174 [2024-11-29 16:03:30.450006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 89856 / 261120 wr_cnt: 1 state: open 00:21:19.174 [2024-11-29 16:03:30.450017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:19.174 [2024-11-29 16:03:30.450026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:19.174 [2024-11-29 16:03:30.450034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:19.174 [2024-11-29 16:03:30.450043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:19.174 [2024-11-29 16:03:30.450051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:19.174 [2024-11-29 16:03:30.450059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:19.174 [2024-11-29 16:03:30.450068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:19.174 [2024-11-29 16:03:30.450076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:19.174 [2024-11-29 16:03:30.450087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:19.174 [2024-11-29 16:03:30.450096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:19.174 [2024-11-29 16:03:30.450103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:19.174 [2024-11-29 16:03:30.450112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:19.174 [2024-11-29 16:03:30.450120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:19.174 [2024-11-29 16:03:30.450127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:19.174 [2024-11-29 16:03:30.450135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:19.174 [2024-11-29 16:03:30.450143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:19.174 [2024-11-29 16:03:30.450151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:19.174 [2024-11-29 16:03:30.450158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:19.174 [2024-11-29 16:03:30.450166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:19.174 [2024-11-29 16:03:30.450173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:19.174 [2024-11-29 
16:03:30.450181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22-100: 0 / 261120 wr_cnt: 0 state: free
00:21:19.175 [2024-11-29 16:03:30.450818] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:21:19.175 [2024-11-29 16:03:30.450826] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2b5b9a76-0a17-4aa5-bab0-1e3efc4353d5
00:21:19.175 [2024-11-29 16:03:30.450835] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 89856
00:21:19.175 [2024-11-29 16:03:30.450843] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 90816
00:21:19.175 [2024-11-29 16:03:30.450850] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 89856
00:21:19.175 [2024-11-29 16:03:30.450866] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0107
00:21:19.175 [2024-11-29 16:03:30.450874] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:21:19.175 [2024-11-29 16:03:30.450886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:21:19.175 [2024-11-29 16:03:30.450894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:21:19.175 [2024-11-29 16:03:30.450907] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:21:19.175 [2024-11-29 16:03:30.450914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:21:19.175 [2024-11-29 16:03:30.450922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:19.175 [2024-11-29 16:03:30.450930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:21:19.175 [2024-11-29 16:03:30.450939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.934 ms
00:21:19.175 [2024-11-29 16:03:30.450947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:19.175 [2024-11-29 16:03:30.464609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:19.175 [2024-11-29 16:03:30.464663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:21:19.175 [2024-11-29 16:03:30.464674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 13.597 ms
00:21:19.175 [2024-11-29 16:03:30.464682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:19.175 [2024-11-29 16:03:30.464900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:19.175 [2024-11-29 16:03:30.464910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:21:19.175 [2024-11-29 16:03:30.464918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.194 ms
00:21:19.175 [2024-11-29 16:03:30.464926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:19.175 [2024-11-29 16:03:30.504640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:19.175 [2024-11-29 16:03:30.504696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:21:19.175 [2024-11-29 16:03:30.504707] mngt/ftl_mngt.c: 409:trace_step:
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.175 [2024-11-29 16:03:30.504716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.175 [2024-11-29 16:03:30.504781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.175 [2024-11-29 16:03:30.504790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:19.175 [2024-11-29 16:03:30.504799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.175 [2024-11-29 16:03:30.504808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.175 [2024-11-29 16:03:30.504893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.175 [2024-11-29 16:03:30.504911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:19.175 [2024-11-29 16:03:30.504919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.175 [2024-11-29 16:03:30.504927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.175 [2024-11-29 16:03:30.504942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.175 [2024-11-29 16:03:30.504951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:19.175 [2024-11-29 16:03:30.504959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.175 [2024-11-29 16:03:30.504966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.175 [2024-11-29 16:03:30.586519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.175 [2024-11-29 16:03:30.586559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:19.175 [2024-11-29 16:03:30.586569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.175 [2024-11-29 16:03:30.586576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.437 [2024-11-29 16:03:30.616068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.437 [2024-11-29 16:03:30.616105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:19.437 [2024-11-29 16:03:30.616114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.437 [2024-11-29 16:03:30.616122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.437 [2024-11-29 16:03:30.616179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.437 [2024-11-29 16:03:30.616188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:19.437 [2024-11-29 16:03:30.616200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.437 [2024-11-29 16:03:30.616208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.437 [2024-11-29 16:03:30.616246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.437 [2024-11-29 16:03:30.616254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:19.437 [2024-11-29 16:03:30.616262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.437 [2024-11-29 16:03:30.616269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.437 [2024-11-29 16:03:30.616356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.437 [2024-11-29 16:03:30.616366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory 
pools 00:21:19.437 [2024-11-29 16:03:30.616374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.437 [2024-11-29 16:03:30.616384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.437 [2024-11-29 16:03:30.616410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.437 [2024-11-29 16:03:30.616419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:19.437 [2024-11-29 16:03:30.616427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.437 [2024-11-29 16:03:30.616434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.437 [2024-11-29 16:03:30.616468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.437 [2024-11-29 16:03:30.616476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:19.437 [2024-11-29 16:03:30.616483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.437 [2024-11-29 16:03:30.616492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.437 [2024-11-29 16:03:30.616531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.437 [2024-11-29 16:03:30.616540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:19.437 [2024-11-29 16:03:30.616548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.437 [2024-11-29 16:03:30.616555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.437 [2024-11-29 16:03:30.616665] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 604.085 ms, result 0 00:21:20.826 00:21:20.826 00:21:20.826 16:03:32 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:20.826 [2024-11-29 16:03:32.206569] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
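The 'FTL shutdown' statistics dumped above are easy to sanity-check by hand: the WAF figure printed by ftl_dev_dump_stats is simply total device writes divided by user writes. A minimal sketch, assuming only that the log rounds WAF to four decimal places (which the printed values suggest):

```python
# Write amplification factor (WAF) as printed by ftl_dev_dump_stats:
# total device writes / user-initiated writes. The values below are
# copied from the two dump blocks in this log.
def waf(total_writes: int, user_writes: int) -> float:
    return total_writes / user_writes

# Dump during the first 'FTL shutdown' above: 90816 / 89856
assert round(waf(90816, 89856), 4) == 1.0107
# Dump at the end of this run, after the copy-back: 44992 / 44032
assert round(waf(44992, 44032), 4) == 1.0218
```

Both results match the WAF values of 1.0107 and 1.0218 that appear in the dumps, so the counters and the derived figure are internally consistent across the restore pass that the spdk_dd invocation above kicks off.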
00:21:20.826 [2024-11-29 16:03:32.206955] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75093 ] 00:21:21.087 [2024-11-29 16:03:32.358865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.349 [2024-11-29 16:03:32.579781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.611 [2024-11-29 16:03:32.869806] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:21.611 [2024-11-29 16:03:32.869881] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:21.611 [2024-11-29 16:03:33.025522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.611 [2024-11-29 16:03:33.025773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:21.611 [2024-11-29 16:03:33.025799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:21.611 [2024-11-29 16:03:33.025812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.611 [2024-11-29 16:03:33.025882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.611 [2024-11-29 16:03:33.025893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:21.611 [2024-11-29 16:03:33.025902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:21:21.611 [2024-11-29 16:03:33.025910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.611 [2024-11-29 16:03:33.025932] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:21.611 [2024-11-29 16:03:33.026723] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:21.612 [2024-11-29 16:03:33.026743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.612 [2024-11-29 16:03:33.026752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:21.612 [2024-11-29 16:03:33.026761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.816 ms 00:21:21.612 [2024-11-29 16:03:33.026769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.612 [2024-11-29 16:03:33.028510] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:21.874 [2024-11-29 16:03:33.043229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.874 [2024-11-29 16:03:33.043303] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:21.874 [2024-11-29 16:03:33.043324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.722 ms 00:21:21.874 [2024-11-29 16:03:33.043333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.874 [2024-11-29 16:03:33.043441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.874 [2024-11-29 16:03:33.043451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:21.874 [2024-11-29 16:03:33.043461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:21.874 [2024-11-29 16:03:33.043468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.874 [2024-11-29 16:03:33.051881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.874 [2024-11-29 
16:03:33.052099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:21.874 [2024-11-29 16:03:33.052119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.326 ms 00:21:21.874 [2024-11-29 16:03:33.052128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.874 [2024-11-29 16:03:33.052230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.874 [2024-11-29 16:03:33.052240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:21.874 [2024-11-29 16:03:33.052250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:21:21.874 [2024-11-29 16:03:33.052258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.874 [2024-11-29 16:03:33.052304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.874 [2024-11-29 16:03:33.052314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:21.874 [2024-11-29 16:03:33.052322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:21.874 [2024-11-29 16:03:33.052329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.874 [2024-11-29 16:03:33.052360] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:21.874 [2024-11-29 16:03:33.056550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.874 [2024-11-29 16:03:33.056590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:21.874 [2024-11-29 16:03:33.056600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.203 ms 00:21:21.874 [2024-11-29 16:03:33.056608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.874 [2024-11-29 16:03:33.056650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.874 [2024-11-29 16:03:33.056658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:21.874 [2024-11-29 16:03:33.056667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:21.875 [2024-11-29 16:03:33.056677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.875 [2024-11-29 16:03:33.056732] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:21.875 [2024-11-29 16:03:33.056755] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:21:21.875 [2024-11-29 16:03:33.056790] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:21.875 [2024-11-29 16:03:33.056806] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:21:21.875 [2024-11-29 16:03:33.056882] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:21.875 [2024-11-29 16:03:33.056893] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:21.875 [2024-11-29 16:03:33.056906] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:21:21.875 [2024-11-29 16:03:33.056916] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:21.875 [2024-11-29 16:03:33.056926] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:21.875 [2024-11-29 16:03:33.056935] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:21.875 [2024-11-29 16:03:33.056943] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:21.875 [2024-11-29 16:03:33.056951] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:21.875 [2024-11-29 16:03:33.056958] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:21.875 [2024-11-29 16:03:33.056966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.875 [2024-11-29 16:03:33.056995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:21.875 [2024-11-29 16:03:33.057003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:21:21.875 [2024-11-29 16:03:33.057011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.875 [2024-11-29 16:03:33.057075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.875 [2024-11-29 16:03:33.057084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:21.875 [2024-11-29 16:03:33.057092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:21:21.875 [2024-11-29 16:03:33.057100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.875 [2024-11-29 16:03:33.057173] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:21.875 [2024-11-29 16:03:33.057183] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:21.875 [2024-11-29 16:03:33.057192] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:21.875 [2024-11-29 16:03:33.057200] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:21.875 [2024-11-29 16:03:33.057208] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:21.875 [2024-11-29 16:03:33.057215] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:21.875 [2024-11-29 16:03:33.057222] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:21.875 [2024-11-29 16:03:33.057392] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:21.875 [2024-11-29 16:03:33.057399] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:21.875 [2024-11-29 16:03:33.057406] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:21.875 [2024-11-29 16:03:33.057413] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:21.875 [2024-11-29 16:03:33.057420] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:21.875 [2024-11-29 16:03:33.057426] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:21.875 [2024-11-29 16:03:33.057433] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:21.875 [2024-11-29 16:03:33.057439] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:21.875 [2024-11-29 16:03:33.057448] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:21.875 [2024-11-29 16:03:33.057463] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:21.875 [2024-11-29 16:03:33.057470] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:21.875 [2024-11-29 16:03:33.057477] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:21:21.875 [2024-11-29 16:03:33.057483] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:21.875 [2024-11-29 16:03:33.057492] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:21.875 [2024-11-29 16:03:33.057499] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:21.875 [2024-11-29 16:03:33.057506] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:21.875 [2024-11-29 16:03:33.057513] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:21.875 [2024-11-29 16:03:33.057519] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:21.875 [2024-11-29 16:03:33.057526] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:21.875 [2024-11-29 16:03:33.057532] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:21.875 [2024-11-29 16:03:33.057539] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:21.875 [2024-11-29 16:03:33.057546] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:21.875 [2024-11-29 16:03:33.057552] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:21.875 [2024-11-29 16:03:33.057559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:21.875 [2024-11-29 16:03:33.057565] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:21.875 [2024-11-29 16:03:33.057572] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:21.875 [2024-11-29 16:03:33.057578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:21.875 [2024-11-29 16:03:33.057584] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:21.875 [2024-11-29 16:03:33.057590] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:21.875 [2024-11-29 16:03:33.057597] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:21.875 [2024-11-29 16:03:33.057604] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:21.875 [2024-11-29 16:03:33.057610] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:21.875 [2024-11-29 16:03:33.057617] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:21.875 [2024-11-29 16:03:33.057623] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:21.875 [2024-11-29 16:03:33.057633] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:21.875 [2024-11-29 16:03:33.057640] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:21.875 [2024-11-29 16:03:33.057648] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:21.875 [2024-11-29 16:03:33.057657] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:21.875 [2024-11-29 16:03:33.057664] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:21.875 [2024-11-29 16:03:33.057670] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:21.875 [2024-11-29 16:03:33.057678] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:21.875 [2024-11-29 16:03:33.057686] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:21.875 [2024-11-29 16:03:33.057724] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:21.875 [2024-11-29 16:03:33.057732] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:21.875 [2024-11-29 16:03:33.057743] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:21.875 [2024-11-29 16:03:33.057751] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:21.875 [2024-11-29 16:03:33.057760] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:21.875 [2024-11-29 16:03:33.057767] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:21.875 [2024-11-29 16:03:33.057775] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:21.875 [2024-11-29 16:03:33.057782] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:21:21.875 [2024-11-29 16:03:33.057789] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:21.875 [2024-11-29 16:03:33.057797] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:21.875 [2024-11-29 16:03:33.057804] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:21.875 [2024-11-29 16:03:33.057812] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:21:21.875 [2024-11-29 16:03:33.057819] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:21.875 [2024-11-29 16:03:33.057826] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:21.875 [2024-11-29 16:03:33.057834] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:21.875 [2024-11-29 16:03:33.057843] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:21.875 [2024-11-29 16:03:33.057850] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:21.875 [2024-11-29 16:03:33.057858] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:21.876 [2024-11-29 16:03:33.057867] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:21.876 [2024-11-29 16:03:33.057874] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:21.876 [2024-11-29 16:03:33.057882] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:21.876 [2024-11-29 16:03:33.057890] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60
00:21:21.876 [2024-11-29 16:03:33.057898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:21.876 [2024-11-29 16:03:33.057906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:21:21.876 [2024-11-29 16:03:33.057913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.769 ms
00:21:21.876 [2024-11-29 16:03:33.057921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:21.876 [2024-11-29 16:03:33.076665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:21.876 [2024-11-29 16:03:33.076724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:21:21.876 [2024-11-29 16:03:33.076737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.698 ms
00:21:21.876 [2024-11-29 16:03:33.076752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:21.876 [2024-11-29 16:03:33.076849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:21.876 [2024-11-29 16:03:33.076858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:21:21.876 [2024-11-29 16:03:33.076867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.066 ms
00:21:21.876 [2024-11-29 16:03:33.076875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:21.876 [2024-11-29 16:03:33.123246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:21.876 [2024-11-29 16:03:33.123458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:21:21.876 [2024-11-29 16:03:33.123481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 46.315 ms
00:21:21.876 [2024-11-29 16:03:33.123489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:21.876 [2024-11-29 16:03:33.123543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:21.876 [2024-11-29 16:03:33.123553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:21:21.876 [2024-11-29 16:03:33.123562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:21:21.876 [2024-11-29 16:03:33.123570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:21.876 [2024-11-29 16:03:33.124181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:21.876 [2024-11-29 16:03:33.124214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:21:21.876 [2024-11-29 16:03:33.124225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.555 ms
00:21:21.876 [2024-11-29 16:03:33.124239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:21.876 [2024-11-29 16:03:33.124370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:21.876 [2024-11-29 16:03:33.124387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:21:21.876 [2024-11-29 16:03:33.124396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.106 ms
00:21:21.876 [2024-11-29 16:03:33.124404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:21.876 [2024-11-29 16:03:33.141145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:21.876 [2024-11-29 16:03:33.141191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:21:21.876 [2024-11-29 16:03:33.141203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.715 ms
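The "SB metadata layout" dump above lists every FTL metadata region as "Region type:0x.. ver:.. blk_offs:0x.. blk_sz:0x..", which makes it easy to check that the regions tile the metadata area without overlapping. A sketch under stated assumptions: the parse_regions helper and the overlap check are illustrative, not part of SPDK, and the sample lines are copied from the nvc dump above. Note also that the type 0x2 region is 0x5000 = 20480 blocks, which at a 4 KiB FTL block equals 80 MiB, consistent with both the "Region l2p ... blocks: 80.00 MiB" entry and the 20971520 L2P entries x 4-byte address size logged earlier.

```python
import re

# Parse region descriptors from the SB metadata layout dump and verify
# that consecutive regions do not overlap. Illustrative only.
REGION_RE = re.compile(
    r"Region type:(0x[0-9a-fA-F]+) ver:(\d+) "
    r"blk_offs:(0x[0-9a-fA-F]+) blk_sz:(0x[0-9a-fA-F]+)"
)

def parse_regions(lines):
    for line in lines:
        m = REGION_RE.search(line)
        if m:
            rtype, ver, offs, size = m.groups()
            yield int(rtype, 16), int(ver), int(offs, 16), int(size, 16)

sample = [  # copied from the nvc layout dump above
    "Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20",
    "Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000",
    "Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80",
]
regions = sorted(parse_regions(sample), key=lambda r: r[2])
for (_, _, offs, size), (_, _, next_offs, _) in zip(regions, regions[1:]):
    assert offs + size <= next_offs  # each region ends before the next begins
```

On the sample lines the regions are laid out back to back (0x0+0x20 = 0x20, 0x20+0x5000 = 0x5020), exactly as the dump shows.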
00:21:21.876 [2024-11-29 16:03:33.141210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:21.876 [2024-11-29 16:03:33.155610] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
00:21:21.876 [2024-11-29 16:03:33.155659] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:21:21.876 [2024-11-29 16:03:33.155672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:21.876 [2024-11-29 16:03:33.155680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:21:21.876 [2024-11-29 16:03:33.155690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 14.346 ms
00:21:21.876 [2024-11-29 16:03:33.155698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:21.876 [2024-11-29 16:03:33.182019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:21.876 [2024-11-29 16:03:33.182217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:21:21.876 [2024-11-29 16:03:33.182239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.264 ms
00:21:21.876 [2024-11-29 16:03:33.182248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:21.876 [2024-11-29 16:03:33.195786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:21.876 [2024-11-29 16:03:33.195834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:21:21.876 [2024-11-29 16:03:33.195847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 13.495 ms
00:21:21.876 [2024-11-29 16:03:33.195855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:21.876 [2024-11-29 16:03:33.208758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:21.876 [2024-11-29 16:03:33.208805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:21:21.876 [2024-11-29 16:03:33.208828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.851 ms
00:21:21.876 [2024-11-29 16:03:33.208835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:21.876 [2024-11-29 16:03:33.209273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:21.876 [2024-11-29 16:03:33.209289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:21:21.876 [2024-11-29 16:03:33.209299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.327 ms
00:21:21.876 [2024-11-29 16:03:33.209307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:21.876 [2024-11-29 16:03:33.277018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:21.876 [2024-11-29 16:03:33.277074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:21:21.876 [2024-11-29 16:03:33.277090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 67.693 ms
00:21:21.876 [2024-11-29 16:03:33.277099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:21.876 [2024-11-29 16:03:33.288580] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:21:21.876 [2024-11-29 16:03:33.291780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:21.876 [2024-11-29 16:03:33.291828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:21:21.876 [2024-11-29 16:03:33.291842] mngt/ftl_mngt.c:
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.618 ms 00:21:21.876 [2024-11-29 16:03:33.291857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.876 [2024-11-29 16:03:33.291933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.876 [2024-11-29 16:03:33.291944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:21.876 [2024-11-29 16:03:33.291953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:21.876 [2024-11-29 16:03:33.291961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.876 [2024-11-29 16:03:33.293513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.876 [2024-11-29 16:03:33.293563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:21.876 [2024-11-29 16:03:33.293573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.351 ms 00:21:21.876 [2024-11-29 16:03:33.293582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.876 [2024-11-29 16:03:33.295019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.876 [2024-11-29 16:03:33.295056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:21:21.876 [2024-11-29 16:03:33.295067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.392 ms 00:21:21.876 [2024-11-29 16:03:33.295075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.876 [2024-11-29 16:03:33.295110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.876 [2024-11-29 16:03:33.295119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:21.876 [2024-11-29 16:03:33.295132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:21.876 [2024-11-29 16:03:33.295140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.876 [2024-11-29 16:03:33.295177] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:21.876 [2024-11-29 16:03:33.295187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.876 [2024-11-29 16:03:33.295198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:21.876 [2024-11-29 16:03:33.295206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:21.876 [2024-11-29 16:03:33.295214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.137 [2024-11-29 16:03:33.322685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.137 [2024-11-29 16:03:33.322876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:22.137 [2024-11-29 16:03:33.322898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.450 ms 00:21:22.137 [2024-11-29 16:03:33.322908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.137 [2024-11-29 16:03:33.323017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.137 [2024-11-29 16:03:33.323028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:22.137 [2024-11-29 16:03:33.323038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:21:22.137 [2024-11-29 16:03:33.323046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.137 [2024-11-29 16:03:33.329181] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 301.741 ms, result 0 00:21:23.519
[2024-11-29T16:03:35.532Z] Copying: 23/1024 [MB] (23 MBps) [intermediate Copying progress updates, throughput 10-31 MBps] [2024-11-29T16:04:37.931Z] Copying: 1024/1024 [MB] (average 15 MBps)
[2024-11-29 16:04:37.894143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:26.500 [2024-11-29 16:04:37.894235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:22:26.500 [2024-11-29 16:04:37.894279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:22:26.500 [2024-11-29 16:04:37.894290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:26.500 [2024-11-29 16:04:37.894319] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:22:26.500 [2024-11-29 16:04:37.898000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:26.500 [2024-11-29 16:04:37.898058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:22:26.500 [2024-11-29 16:04:37.898069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.661 ms
00:22:26.500 [2024-11-29 16:04:37.898078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:26.500 [2024-11-29 16:04:37.898351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:26.500 [2024-11-29 16:04:37.898362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:22:26.500 [2024-11-29 16:04:37.898376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.244 ms
00:22:26.500 [2024-11-29 16:04:37.898385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:26.500 [2024-11-29 16:04:37.904842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:26.500 [2024-11-29 16:04:37.905040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:22:26.500 [2024-11-29 16:04:37.905118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.439 ms
00:22:26.500 [2024-11-29 16:04:37.905145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:26.500 [2024-11-29 16:04:37.911760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:26.500 [2024-11-29 16:04:37.911915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P unmaps
00:22:26.500 [2024-11-29 16:04:37.912010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.562 ms
00:22:26.500 [2024-11-29 16:04:37.912049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:26.763 [2024-11-29 16:04:37.941984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:26.763 [2024-11-29 16:04:37.942164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:22:26.763 [2024-11-29 16:04:37.942231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 29.856 ms
00:22:26.763 [2024-11-29 16:04:37.942255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:26.763 [2024-11-29 16:04:37.958024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:26.763 [2024-11-29 16:04:37.958191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
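Every management step in this stream is logged by trace_step as an Action/name/duration/status quadruplet, so it is straightforward to tally where the shutdown time goes. A rough sketch that works on the wrapped single-stream form shown here; "console.log" is a placeholder path, and the name pattern leans on the harness timestamp that follows each field in this particular capture:

```python
import re

# Pull name/duration pairs out of a saved console log and rank the
# slowest FTL management steps. Assumes the wrapped single-stream form
# of this log, where each logged field is followed by an HH:MM:SS.mmm
# harness timestamp; pairs are matched positionally, so this is a
# rough diagnostic, not a strict parser.
NAME_RE = re.compile(r"name:\s+(.*?) \d{2}:\d{2}:\d{2}\.\d{3}")
DUR_RE = re.compile(r"duration: ([0-9.]+) ms")

def slowest_steps(text, top=5):
    steps = zip(NAME_RE.findall(text), (float(d) for d in DUR_RE.findall(text)))
    return sorted(steps, key=lambda s: s[1], reverse=True)[:top]

with open("console.log") as f:  # placeholder path for a saved copy of this log
    for name, ms in slowest_steps(f.read()):
        print(f"{ms:10.3f} ms  {name}")
```

On the shutdown sequence around this point, such a tally would surface "Persist P2L metadata" (229.148 ms, logged just below) as the dominant step.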
00:22:26.763 [2024-11-29 16:04:37.958213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.642 ms
00:22:26.763 [2024-11-29 16:04:37.958221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:26.763 [2024-11-29 16:04:38.187493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:26.763 [2024-11-29 16:04:38.187540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:22:26.763 [2024-11-29 16:04:38.187555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 229.148 ms
00:22:26.763 [2024-11-29 16:04:38.187563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:27.027 [2024-11-29 16:04:38.214474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:27.027 [2024-11-29 16:04:38.214636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     persist band info metadata
00:22:27.027 [2024-11-29 16:04:38.214657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.886 ms
00:22:27.027 [2024-11-29 16:04:38.214665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:27.027 [2024-11-29 16:04:38.240348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:27.027 [2024-11-29 16:04:38.240383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     persist trim metadata
00:22:27.027 [2024-11-29 16:04:38.240396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 25.647 ms
00:22:27.027 [2024-11-29 16:04:38.240416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:27.027 [2024-11-29 16:04:38.265399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:27.027 [2024-11-29 16:04:38.265433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:22:27.027 [2024-11-29 16:04:38.265445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.938 ms
00:22:27.027 [2024-11-29 16:04:38.265452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:27.027 [2024-11-29 16:04:38.290712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:27.027 [2024-11-29 16:04:38.290746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:22:27.027 [2024-11-29 16:04:38.290758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 25.171 ms
00:22:27.027 [2024-11-29 16:04:38.290765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:27.027 [2024-11-29 16:04:38.290810] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:22:27.027 [2024-11-29 16:04:38.290825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open
00:22:27.027 [2024-11-29 16:04:38.290837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2-100: 0 / 261120 wr_cnt: 0 state: free
00:22:27.028 [2024-11-29 16:04:38.291652] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:22:27.028 [2024-11-29 16:04:38.291661] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2b5b9a76-0a17-4aa5-bab0-1e3efc4353d5
00:22:27.028 [2024-11-29 16:04:38.291669] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888
00:22:27.028 [2024-11-29 16:04:38.291677] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 44992
00:22:27.028 [2024-11-29 16:04:38.291684] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 44032
00:22:27.028 [2024-11-29 16:04:38.291700] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0218
00:22:27.028 [2024-11-29 16:04:38.291707] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:27.028
[2024-11-29 16:04:38.291715] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:27.028 [2024-11-29 16:04:38.291723] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:27.028 [2024-11-29 16:04:38.291730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:27.028 [2024-11-29 16:04:38.291744] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:27.028 [2024-11-29 16:04:38.291752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.028 [2024-11-29 16:04:38.291774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:27.028 [2024-11-29 16:04:38.291783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.943 ms 00:22:27.028 [2024-11-29 16:04:38.291791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.028 [2024-11-29 16:04:38.305208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.028 [2024-11-29 16:04:38.305245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:27.028 [2024-11-29 16:04:38.305258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.384 ms 00:22:27.028 [2024-11-29 16:04:38.305266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.028 [2024-11-29 16:04:38.305495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.028 [2024-11-29 16:04:38.305505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:27.028 [2024-11-29 16:04:38.305513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:22:27.028 [2024-11-29 16:04:38.305520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.028 [2024-11-29 16:04:38.344205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.028 [2024-11-29 16:04:38.344380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:27.028 [2024-11-29 16:04:38.344400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.028 [2024-11-29 16:04:38.344409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.028 [2024-11-29 16:04:38.344487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.028 [2024-11-29 16:04:38.344495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:27.028 [2024-11-29 16:04:38.344504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.028 [2024-11-29 16:04:38.344511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.028 [2024-11-29 16:04:38.344587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.028 [2024-11-29 16:04:38.344601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:27.028 [2024-11-29 16:04:38.344609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.028 [2024-11-29 16:04:38.344618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.028 [2024-11-29 16:04:38.344633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.028 [2024-11-29 16:04:38.344641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:27.028 [2024-11-29 16:04:38.344649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.028 [2024-11-29 16:04:38.344656] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.028 [2024-11-29 16:04:38.425039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.028 [2024-11-29 16:04:38.425087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:27.028 [2024-11-29 16:04:38.425100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.028 [2024-11-29 16:04:38.425108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.290 [2024-11-29 16:04:38.456639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.290 [2024-11-29 16:04:38.456686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:27.290 [2024-11-29 16:04:38.456697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.290 [2024-11-29 16:04:38.456706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.290 [2024-11-29 16:04:38.456771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.290 [2024-11-29 16:04:38.456781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:27.290 [2024-11-29 16:04:38.456796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.290 [2024-11-29 16:04:38.456804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.290 [2024-11-29 16:04:38.456847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.290 [2024-11-29 16:04:38.456856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:27.290 [2024-11-29 16:04:38.456865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.290 [2024-11-29 16:04:38.456873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.290 [2024-11-29 16:04:38.457007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.290 [2024-11-29 16:04:38.457019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:27.290 [2024-11-29 16:04:38.457027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.290 [2024-11-29 16:04:38.457038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.290 [2024-11-29 16:04:38.457072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.290 [2024-11-29 16:04:38.457082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:27.290 [2024-11-29 16:04:38.457090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.290 [2024-11-29 16:04:38.457099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.290 [2024-11-29 16:04:38.457139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.290 [2024-11-29 16:04:38.457148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:27.290 [2024-11-29 16:04:38.457157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.290 [2024-11-29 16:04:38.457167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.290 [2024-11-29 16:04:38.457214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.290 [2024-11-29 16:04:38.457224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:27.290 [2024-11-29 16:04:38.457232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:22:27.290 [2024-11-29 16:04:38.457240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.290 [2024-11-29 16:04:38.457375] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 563.208 ms, result 0 00:22:28.235 00:22:28.235 00:22:28.235 16:04:39 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:30.153 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:30.153 16:04:41 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:30.153 16:04:41 -- ftl/restore.sh@85 -- # restore_kill 00:22:30.153 16:04:41 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:30.153 16:04:41 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:30.153 16:04:41 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:30.153 Process with pid 72735 is not found 00:22:30.153 Remove shared memory files 00:22:30.153 16:04:41 -- ftl/restore.sh@32 -- # killprocess 72735 00:22:30.153 16:04:41 -- common/autotest_common.sh@936 -- # '[' -z 72735 ']' 00:22:30.153 16:04:41 -- common/autotest_common.sh@940 -- # kill -0 72735 00:22:30.153 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (72735) - No such process 00:22:30.153 16:04:41 -- common/autotest_common.sh@963 -- # echo 'Process with pid 72735 is not found' 00:22:30.153 16:04:41 -- ftl/restore.sh@33 -- # remove_shm 00:22:30.153 16:04:41 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:30.153 16:04:41 -- ftl/common.sh@205 -- # rm -f rm -f 00:22:30.153 16:04:41 -- ftl/common.sh@206 -- # rm -f rm -f 00:22:30.153 16:04:41 -- ftl/common.sh@207 -- # rm -f rm -f 00:22:30.153 16:04:41 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:30.153 16:04:41 -- ftl/common.sh@209 -- # rm -f rm -f 00:22:30.153 ************************************ 00:22:30.153 END TEST ftl_restore 00:22:30.153 ************************************ 00:22:30.153 00:22:30.153 real 4m53.841s 00:22:30.153 user 4m39.857s 00:22:30.153 sys 0m13.378s 00:22:30.153 16:04:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:30.153 16:04:41 -- common/autotest_common.sh@10 -- # set +x 00:22:30.413 16:04:41 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:22:30.413 16:04:41 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:22:30.413 16:04:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:30.413 16:04:41 -- common/autotest_common.sh@10 -- # set +x 00:22:30.413 ************************************ 00:22:30.413 START TEST ftl_dirty_shutdown 00:22:30.413 ************************************ 00:22:30.413 16:04:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:22:30.413 * Looking for test storage... 
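The restore case ends with the usual verify-and-teardown sequence seen above: md5sum -c re-checks the test file against the checksum recorded earlier in the run, restore_kill deletes the scratch files and the ftl.json config, and killprocess probes pid 72735 with kill -0 before signalling it; the target had already exited, hence the "No such process" line and the fallback echo. A minimal sketch of that pattern, using the path and pid from this run (the killprocess body is a simplified reading of the autotest_common.sh helper, not a verbatim copy):

    # Sketch of the verify-and-teardown pattern above; simplified, illustrative.
    md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5

    killprocess() {
        local pid=$1
        # kill -0 delivers no signal; it only asks whether the pid still exists
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid" && wait "$pid"
        else
            echo "Process with pid $pid is not found"
        fi
    }
    killprocess 72735   # already gone here, so only the echo runs

run_test brackets each case with the START/END banners and a bash time invocation (the real/user/sys lines), then launches dirty_shutdown.sh, whose getopts loop takes -c 0000:00:06.0 as the NV-cache address and 0000:00:07.0 as the base device.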
00:22:30.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:30.413 16:04:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:30.413 16:04:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:30.413 16:04:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:30.413 16:04:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:30.413 16:04:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:30.413 16:04:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:30.413 16:04:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:30.413 16:04:41 -- scripts/common.sh@335 -- # IFS=.-: 00:22:30.413 16:04:41 -- scripts/common.sh@335 -- # read -ra ver1 00:22:30.413 16:04:41 -- scripts/common.sh@336 -- # IFS=.-: 00:22:30.413 16:04:41 -- scripts/common.sh@336 -- # read -ra ver2 00:22:30.413 16:04:41 -- scripts/common.sh@337 -- # local 'op=<' 00:22:30.413 16:04:41 -- scripts/common.sh@339 -- # ver1_l=2 00:22:30.413 16:04:41 -- scripts/common.sh@340 -- # ver2_l=1 00:22:30.413 16:04:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:30.413 16:04:41 -- scripts/common.sh@343 -- # case "$op" in 00:22:30.413 16:04:41 -- scripts/common.sh@344 -- # : 1 00:22:30.413 16:04:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:30.413 16:04:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:30.413 16:04:41 -- scripts/common.sh@364 -- # decimal 1 00:22:30.413 16:04:41 -- scripts/common.sh@352 -- # local d=1 00:22:30.413 16:04:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:30.413 16:04:41 -- scripts/common.sh@354 -- # echo 1 00:22:30.413 16:04:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:30.413 16:04:41 -- scripts/common.sh@365 -- # decimal 2 00:22:30.413 16:04:41 -- scripts/common.sh@352 -- # local d=2 00:22:30.413 16:04:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:30.413 16:04:41 -- scripts/common.sh@354 -- # echo 2 00:22:30.413 16:04:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:30.413 16:04:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:30.413 16:04:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:30.413 16:04:41 -- scripts/common.sh@367 -- # return 0 00:22:30.413 16:04:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:30.413 16:04:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:30.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.413 --rc genhtml_branch_coverage=1 00:22:30.413 --rc genhtml_function_coverage=1 00:22:30.413 --rc genhtml_legend=1 00:22:30.413 --rc geninfo_all_blocks=1 00:22:30.413 --rc geninfo_unexecuted_blocks=1 00:22:30.413 00:22:30.413 ' 00:22:30.413 16:04:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:30.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.413 --rc genhtml_branch_coverage=1 00:22:30.413 --rc genhtml_function_coverage=1 00:22:30.413 --rc genhtml_legend=1 00:22:30.413 --rc geninfo_all_blocks=1 00:22:30.413 --rc geninfo_unexecuted_blocks=1 00:22:30.413 00:22:30.413 ' 00:22:30.413 16:04:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:30.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.413 --rc genhtml_branch_coverage=1 00:22:30.413 --rc genhtml_function_coverage=1 00:22:30.413 --rc genhtml_legend=1 00:22:30.413 --rc geninfo_all_blocks=1 00:22:30.413 --rc geninfo_unexecuted_blocks=1 00:22:30.413 00:22:30.413 ' 00:22:30.413 16:04:41 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:30.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.413 --rc genhtml_branch_coverage=1 00:22:30.414 --rc genhtml_function_coverage=1 00:22:30.414 --rc genhtml_legend=1 00:22:30.414 --rc geninfo_all_blocks=1 00:22:30.414 --rc geninfo_unexecuted_blocks=1 00:22:30.414 00:22:30.414 ' 00:22:30.414 16:04:41 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:30.414 16:04:41 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:30.414 16:04:41 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:30.414 16:04:41 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:30.414 16:04:41 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:30.414 16:04:41 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:30.414 16:04:41 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:30.414 16:04:41 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:30.414 16:04:41 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:30.414 16:04:41 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:30.414 16:04:41 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:30.414 16:04:41 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:30.414 16:04:41 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:30.414 16:04:41 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:30.414 16:04:41 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:30.414 16:04:41 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:30.414 16:04:41 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:30.414 16:04:41 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:30.414 16:04:41 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:30.414 16:04:41 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:30.414 16:04:41 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:30.414 16:04:41 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:30.414 16:04:41 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:30.414 16:04:41 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:30.414 16:04:41 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:30.414 16:04:41 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:30.414 16:04:41 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:30.414 16:04:41 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:30.414 16:04:41 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:30.414 16:04:41 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:30.414 16:04:41 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:30.414 16:04:41 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:30.414 16:04:41 -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:30.414 16:04:41 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:06.0 00:22:30.414 16:04:41 -- ftl/dirty_shutdown.sh@14 -- # 
getopts :u:c: opt 00:22:30.414 16:04:41 -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:30.414 16:04:41 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:07.0 00:22:30.414 16:04:41 -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:30.414 16:04:41 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:30.414 16:04:41 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:30.414 16:04:41 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:30.414 16:04:41 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:30.414 16:04:41 -- ftl/dirty_shutdown.sh@45 -- # svcpid=75877 00:22:30.414 16:04:41 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 75877 00:22:30.414 16:04:41 -- common/autotest_common.sh@829 -- # '[' -z 75877 ']' 00:22:30.414 16:04:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.414 16:04:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:30.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.414 16:04:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.414 16:04:41 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:30.414 16:04:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:30.414 16:04:41 -- common/autotest_common.sh@10 -- # set +x 00:22:30.675 [2024-11-29 16:04:41.849032] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:30.675 [2024-11-29 16:04:41.849502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75877 ] 00:22:30.675 [2024-11-29 16:04:41.996542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.937 [2024-11-29 16:04:42.178360] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:30.937 [2024-11-29 16:04:42.178559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.325 16:04:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:32.325 16:04:43 -- common/autotest_common.sh@862 -- # return 0 00:22:32.325 16:04:43 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:22:32.325 16:04:43 -- ftl/common.sh@54 -- # local name=nvme0 00:22:32.325 16:04:43 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:22:32.325 16:04:43 -- ftl/common.sh@56 -- # local size=103424 00:22:32.325 16:04:43 -- ftl/common.sh@59 -- # local base_bdev 00:22:32.325 16:04:43 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:22:32.325 16:04:43 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:32.325 16:04:43 -- ftl/common.sh@62 -- # local base_size 00:22:32.325 16:04:43 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:32.325 16:04:43 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:22:32.325 16:04:43 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:32.325 16:04:43 -- common/autotest_common.sh@1369 -- # local bs 00:22:32.325 16:04:43 -- common/autotest_common.sh@1370 -- # local nb 00:22:32.325 16:04:43 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:32.584 16:04:43 -- common/autotest_common.sh@1371 -- # 
bdev_info='[ 00:22:32.584 { 00:22:32.584 "name": "nvme0n1", 00:22:32.584 "aliases": [ 00:22:32.584 "257141f8-1b95-46e4-bb03-433c1d97a72e" 00:22:32.584 ], 00:22:32.584 "product_name": "NVMe disk", 00:22:32.584 "block_size": 4096, 00:22:32.584 "num_blocks": 1310720, 00:22:32.584 "uuid": "257141f8-1b95-46e4-bb03-433c1d97a72e", 00:22:32.584 "assigned_rate_limits": { 00:22:32.584 "rw_ios_per_sec": 0, 00:22:32.584 "rw_mbytes_per_sec": 0, 00:22:32.584 "r_mbytes_per_sec": 0, 00:22:32.584 "w_mbytes_per_sec": 0 00:22:32.584 }, 00:22:32.584 "claimed": true, 00:22:32.584 "claim_type": "read_many_write_one", 00:22:32.584 "zoned": false, 00:22:32.584 "supported_io_types": { 00:22:32.584 "read": true, 00:22:32.584 "write": true, 00:22:32.584 "unmap": true, 00:22:32.584 "write_zeroes": true, 00:22:32.584 "flush": true, 00:22:32.584 "reset": true, 00:22:32.584 "compare": true, 00:22:32.584 "compare_and_write": false, 00:22:32.584 "abort": true, 00:22:32.584 "nvme_admin": true, 00:22:32.584 "nvme_io": true 00:22:32.584 }, 00:22:32.584 "driver_specific": { 00:22:32.584 "nvme": [ 00:22:32.584 { 00:22:32.584 "pci_address": "0000:00:07.0", 00:22:32.584 "trid": { 00:22:32.584 "trtype": "PCIe", 00:22:32.584 "traddr": "0000:00:07.0" 00:22:32.584 }, 00:22:32.584 "ctrlr_data": { 00:22:32.584 "cntlid": 0, 00:22:32.584 "vendor_id": "0x1b36", 00:22:32.584 "model_number": "QEMU NVMe Ctrl", 00:22:32.584 "serial_number": "12341", 00:22:32.584 "firmware_revision": "8.0.0", 00:22:32.584 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:32.584 "oacs": { 00:22:32.584 "security": 0, 00:22:32.584 "format": 1, 00:22:32.584 "firmware": 0, 00:22:32.584 "ns_manage": 1 00:22:32.584 }, 00:22:32.584 "multi_ctrlr": false, 00:22:32.584 "ana_reporting": false 00:22:32.584 }, 00:22:32.584 "vs": { 00:22:32.584 "nvme_version": "1.4" 00:22:32.584 }, 00:22:32.584 "ns_data": { 00:22:32.584 "id": 1, 00:22:32.584 "can_share": false 00:22:32.584 } 00:22:32.584 } 00:22:32.584 ], 00:22:32.584 "mp_policy": "active_passive" 00:22:32.584 } 00:22:32.584 } 00:22:32.584 ]' 00:22:32.584 16:04:43 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:32.584 16:04:43 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:32.584 16:04:43 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:32.584 16:04:43 -- common/autotest_common.sh@1373 -- # nb=1310720 00:22:32.584 16:04:43 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:22:32.584 16:04:43 -- common/autotest_common.sh@1377 -- # echo 5120 00:22:32.584 16:04:43 -- ftl/common.sh@63 -- # base_size=5120 00:22:32.584 16:04:43 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:32.584 16:04:43 -- ftl/common.sh@67 -- # clear_lvols 00:22:32.584 16:04:43 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:32.584 16:04:43 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:32.844 16:04:44 -- ftl/common.sh@28 -- # stores=98ccdb1f-c57f-40b2-8a47-adf8f53d3bff 00:22:32.844 16:04:44 -- ftl/common.sh@29 -- # for lvs in $stores 00:22:32.844 16:04:44 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 98ccdb1f-c57f-40b2-8a47-adf8f53d3bff 00:22:32.844 16:04:44 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:33.102 16:04:44 -- ftl/common.sh@68 -- # lvs=e0eee463-6703-4c6d-81fc-2ce48274cdbd 00:22:33.102 16:04:44 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 
e0eee463-6703-4c6d-81fc-2ce48274cdbd 00:22:33.360 16:04:44 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=d949cada-45ad-4d5a-93f6-6a8f7044eb62 00:22:33.360 16:04:44 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:06.0 ']' 00:22:33.360 16:04:44 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:06.0 d949cada-45ad-4d5a-93f6-6a8f7044eb62 00:22:33.360 16:04:44 -- ftl/common.sh@35 -- # local name=nvc0 00:22:33.360 16:04:44 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:22:33.360 16:04:44 -- ftl/common.sh@37 -- # local base_bdev=d949cada-45ad-4d5a-93f6-6a8f7044eb62 00:22:33.360 16:04:44 -- ftl/common.sh@38 -- # local cache_size= 00:22:33.360 16:04:44 -- ftl/common.sh@41 -- # get_bdev_size d949cada-45ad-4d5a-93f6-6a8f7044eb62 00:22:33.360 16:04:44 -- common/autotest_common.sh@1367 -- # local bdev_name=d949cada-45ad-4d5a-93f6-6a8f7044eb62 00:22:33.360 16:04:44 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:33.360 16:04:44 -- common/autotest_common.sh@1369 -- # local bs 00:22:33.360 16:04:44 -- common/autotest_common.sh@1370 -- # local nb 00:22:33.360 16:04:44 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d949cada-45ad-4d5a-93f6-6a8f7044eb62 00:22:33.619 16:04:44 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:33.619 { 00:22:33.619 "name": "d949cada-45ad-4d5a-93f6-6a8f7044eb62", 00:22:33.619 "aliases": [ 00:22:33.619 "lvs/nvme0n1p0" 00:22:33.619 ], 00:22:33.619 "product_name": "Logical Volume", 00:22:33.619 "block_size": 4096, 00:22:33.619 "num_blocks": 26476544, 00:22:33.619 "uuid": "d949cada-45ad-4d5a-93f6-6a8f7044eb62", 00:22:33.619 "assigned_rate_limits": { 00:22:33.619 "rw_ios_per_sec": 0, 00:22:33.619 "rw_mbytes_per_sec": 0, 00:22:33.619 "r_mbytes_per_sec": 0, 00:22:33.619 "w_mbytes_per_sec": 0 00:22:33.619 }, 00:22:33.619 "claimed": false, 00:22:33.619 "zoned": false, 00:22:33.619 "supported_io_types": { 00:22:33.619 "read": true, 00:22:33.619 "write": true, 00:22:33.619 "unmap": true, 00:22:33.619 "write_zeroes": true, 00:22:33.619 "flush": false, 00:22:33.619 "reset": true, 00:22:33.619 "compare": false, 00:22:33.619 "compare_and_write": false, 00:22:33.619 "abort": false, 00:22:33.619 "nvme_admin": false, 00:22:33.619 "nvme_io": false 00:22:33.619 }, 00:22:33.619 "driver_specific": { 00:22:33.619 "lvol": { 00:22:33.619 "lvol_store_uuid": "e0eee463-6703-4c6d-81fc-2ce48274cdbd", 00:22:33.619 "base_bdev": "nvme0n1", 00:22:33.619 "thin_provision": true, 00:22:33.619 "snapshot": false, 00:22:33.619 "clone": false, 00:22:33.619 "esnap_clone": false 00:22:33.619 } 00:22:33.619 } 00:22:33.619 } 00:22:33.619 ]' 00:22:33.619 16:04:44 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:33.619 16:04:44 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:33.619 16:04:44 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:33.619 16:04:44 -- common/autotest_common.sh@1373 -- # nb=26476544 00:22:33.619 16:04:44 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:22:33.619 16:04:44 -- common/autotest_common.sh@1377 -- # echo 103424 00:22:33.619 16:04:44 -- ftl/common.sh@41 -- # local base_size=5171 00:22:33.619 16:04:44 -- ftl/common.sh@44 -- # local nvc_bdev 00:22:33.619 16:04:44 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:22:33.878 16:04:45 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:33.878 16:04:45 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:33.878 16:04:45 -- ftl/common.sh@48 
-- # get_bdev_size d949cada-45ad-4d5a-93f6-6a8f7044eb62 00:22:33.878 16:04:45 -- common/autotest_common.sh@1367 -- # local bdev_name=d949cada-45ad-4d5a-93f6-6a8f7044eb62 00:22:33.878 16:04:45 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:33.878 16:04:45 -- common/autotest_common.sh@1369 -- # local bs 00:22:33.878 16:04:45 -- common/autotest_common.sh@1370 -- # local nb 00:22:33.878 16:04:45 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d949cada-45ad-4d5a-93f6-6a8f7044eb62 00:22:34.138 16:04:45 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:34.138 { 00:22:34.138 "name": "d949cada-45ad-4d5a-93f6-6a8f7044eb62", 00:22:34.138 "aliases": [ 00:22:34.138 "lvs/nvme0n1p0" 00:22:34.138 ], 00:22:34.138 "product_name": "Logical Volume", 00:22:34.138 "block_size": 4096, 00:22:34.138 "num_blocks": 26476544, 00:22:34.138 "uuid": "d949cada-45ad-4d5a-93f6-6a8f7044eb62", 00:22:34.138 "assigned_rate_limits": { 00:22:34.138 "rw_ios_per_sec": 0, 00:22:34.138 "rw_mbytes_per_sec": 0, 00:22:34.138 "r_mbytes_per_sec": 0, 00:22:34.138 "w_mbytes_per_sec": 0 00:22:34.138 }, 00:22:34.138 "claimed": false, 00:22:34.138 "zoned": false, 00:22:34.138 "supported_io_types": { 00:22:34.138 "read": true, 00:22:34.138 "write": true, 00:22:34.138 "unmap": true, 00:22:34.138 "write_zeroes": true, 00:22:34.138 "flush": false, 00:22:34.138 "reset": true, 00:22:34.138 "compare": false, 00:22:34.138 "compare_and_write": false, 00:22:34.138 "abort": false, 00:22:34.138 "nvme_admin": false, 00:22:34.138 "nvme_io": false 00:22:34.138 }, 00:22:34.138 "driver_specific": { 00:22:34.138 "lvol": { 00:22:34.138 "lvol_store_uuid": "e0eee463-6703-4c6d-81fc-2ce48274cdbd", 00:22:34.138 "base_bdev": "nvme0n1", 00:22:34.138 "thin_provision": true, 00:22:34.138 "snapshot": false, 00:22:34.138 "clone": false, 00:22:34.138 "esnap_clone": false 00:22:34.138 } 00:22:34.138 } 00:22:34.138 } 00:22:34.138 ]' 00:22:34.138 16:04:45 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:34.138 16:04:45 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:34.138 16:04:45 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:34.138 16:04:45 -- common/autotest_common.sh@1373 -- # nb=26476544 00:22:34.138 16:04:45 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:22:34.139 16:04:45 -- common/autotest_common.sh@1377 -- # echo 103424 00:22:34.139 16:04:45 -- ftl/common.sh@48 -- # cache_size=5171 00:22:34.139 16:04:45 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:34.401 16:04:45 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:34.401 16:04:45 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size d949cada-45ad-4d5a-93f6-6a8f7044eb62 00:22:34.401 16:04:45 -- common/autotest_common.sh@1367 -- # local bdev_name=d949cada-45ad-4d5a-93f6-6a8f7044eb62 00:22:34.401 16:04:45 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:34.401 16:04:45 -- common/autotest_common.sh@1369 -- # local bs 00:22:34.401 16:04:45 -- common/autotest_common.sh@1370 -- # local nb 00:22:34.401 16:04:45 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d949cada-45ad-4d5a-93f6-6a8f7044eb62 00:22:34.401 16:04:45 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:34.401 { 00:22:34.401 "name": "d949cada-45ad-4d5a-93f6-6a8f7044eb62", 00:22:34.401 "aliases": [ 00:22:34.401 "lvs/nvme0n1p0" 00:22:34.401 ], 00:22:34.401 "product_name": "Logical Volume", 00:22:34.401 
"block_size": 4096, 00:22:34.401 "num_blocks": 26476544, 00:22:34.401 "uuid": "d949cada-45ad-4d5a-93f6-6a8f7044eb62", 00:22:34.401 "assigned_rate_limits": { 00:22:34.401 "rw_ios_per_sec": 0, 00:22:34.401 "rw_mbytes_per_sec": 0, 00:22:34.401 "r_mbytes_per_sec": 0, 00:22:34.401 "w_mbytes_per_sec": 0 00:22:34.401 }, 00:22:34.401 "claimed": false, 00:22:34.401 "zoned": false, 00:22:34.401 "supported_io_types": { 00:22:34.401 "read": true, 00:22:34.401 "write": true, 00:22:34.401 "unmap": true, 00:22:34.401 "write_zeroes": true, 00:22:34.401 "flush": false, 00:22:34.401 "reset": true, 00:22:34.401 "compare": false, 00:22:34.401 "compare_and_write": false, 00:22:34.401 "abort": false, 00:22:34.401 "nvme_admin": false, 00:22:34.401 "nvme_io": false 00:22:34.401 }, 00:22:34.401 "driver_specific": { 00:22:34.401 "lvol": { 00:22:34.401 "lvol_store_uuid": "e0eee463-6703-4c6d-81fc-2ce48274cdbd", 00:22:34.401 "base_bdev": "nvme0n1", 00:22:34.401 "thin_provision": true, 00:22:34.401 "snapshot": false, 00:22:34.401 "clone": false, 00:22:34.401 "esnap_clone": false 00:22:34.401 } 00:22:34.401 } 00:22:34.401 } 00:22:34.401 ]' 00:22:34.401 16:04:45 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:34.401 16:04:45 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:34.401 16:04:45 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:34.401 16:04:45 -- common/autotest_common.sh@1373 -- # nb=26476544 00:22:34.401 16:04:45 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:22:34.401 16:04:45 -- common/autotest_common.sh@1377 -- # echo 103424 00:22:34.401 16:04:45 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:34.401 16:04:45 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d d949cada-45ad-4d5a-93f6-6a8f7044eb62 --l2p_dram_limit 10' 00:22:34.401 16:04:45 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:34.401 16:04:45 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:06.0 ']' 00:22:34.401 16:04:45 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:34.401 16:04:45 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d949cada-45ad-4d5a-93f6-6a8f7044eb62 --l2p_dram_limit 10 -c nvc0n1p0 00:22:34.661 [2024-11-29 16:04:45.964814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.661 [2024-11-29 16:04:45.964876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:34.661 [2024-11-29 16:04:45.964893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:34.661 [2024-11-29 16:04:45.964903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.661 [2024-11-29 16:04:45.964962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.661 [2024-11-29 16:04:45.964986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:34.661 [2024-11-29 16:04:45.964996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:34.661 [2024-11-29 16:04:45.965003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.661 [2024-11-29 16:04:45.965023] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:34.661 [2024-11-29 16:04:45.965731] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:34.661 [2024-11-29 16:04:45.965762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:22:34.661 [2024-11-29 16:04:45.965770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:34.661 [2024-11-29 16:04:45.965780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:22:34.661 [2024-11-29 16:04:45.965786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.661 [2024-11-29 16:04:45.965955] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6de4d882-07fd-4d7b-8ead-4dd2e9318b0e 00:22:34.661 [2024-11-29 16:04:45.967551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.661 [2024-11-29 16:04:45.967595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:34.661 [2024-11-29 16:04:45.967605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:34.661 [2024-11-29 16:04:45.967614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.661 [2024-11-29 16:04:45.975315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.661 [2024-11-29 16:04:45.975354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:34.661 [2024-11-29 16:04:45.975362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.662 ms 00:22:34.661 [2024-11-29 16:04:45.975371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.661 [2024-11-29 16:04:45.975453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.661 [2024-11-29 16:04:45.975463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:34.661 [2024-11-29 16:04:45.975471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:22:34.661 [2024-11-29 16:04:45.975482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.661 [2024-11-29 16:04:45.975524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.661 [2024-11-29 16:04:45.975536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:34.661 [2024-11-29 16:04:45.975542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:34.661 [2024-11-29 16:04:45.975550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.661 [2024-11-29 16:04:45.975572] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:34.661 [2024-11-29 16:04:45.979240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.661 [2024-11-29 16:04:45.979275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:34.661 [2024-11-29 16:04:45.979285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.673 ms 00:22:34.661 [2024-11-29 16:04:45.979293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.661 [2024-11-29 16:04:45.979330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.661 [2024-11-29 16:04:45.979338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:34.661 [2024-11-29 16:04:45.979347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:34.661 [2024-11-29 16:04:45.979353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.661 [2024-11-29 16:04:45.979374] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:34.661 [2024-11-29 16:04:45.979472] 
upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:22:34.661 [2024-11-29 16:04:45.979494] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:34.661 [2024-11-29 16:04:45.979503] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:22:34.661 [2024-11-29 16:04:45.979514] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:34.661 [2024-11-29 16:04:45.979523] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:34.661 [2024-11-29 16:04:45.979534] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:34.661 [2024-11-29 16:04:45.979548] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:34.661 [2024-11-29 16:04:45.979557] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:22:34.661 [2024-11-29 16:04:45.979564] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:22:34.661 [2024-11-29 16:04:45.979572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.661 [2024-11-29 16:04:45.979578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:34.661 [2024-11-29 16:04:45.979613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:22:34.661 [2024-11-29 16:04:45.979624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.661 [2024-11-29 16:04:45.979675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.661 [2024-11-29 16:04:45.979682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:34.661 [2024-11-29 16:04:45.979690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:34.661 [2024-11-29 16:04:45.979698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.661 [2024-11-29 16:04:45.979772] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:34.661 [2024-11-29 16:04:45.979780] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:34.661 [2024-11-29 16:04:45.979789] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:34.661 [2024-11-29 16:04:45.979797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:34.661 [2024-11-29 16:04:45.979805] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:34.661 [2024-11-29 16:04:45.979810] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:34.662 [2024-11-29 16:04:45.979817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:34.662 [2024-11-29 16:04:45.979823] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:34.662 [2024-11-29 16:04:45.979830] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:34.662 [2024-11-29 16:04:45.979836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:34.662 [2024-11-29 16:04:45.979843] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:34.662 [2024-11-29 16:04:45.979848] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:34.662 [2024-11-29 16:04:45.979856] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:34.662 
[2024-11-29 16:04:45.979861] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:34.662 [2024-11-29 16:04:45.979868] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:22:34.662 [2024-11-29 16:04:45.979873] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:34.662 [2024-11-29 16:04:45.979883] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:34.662 [2024-11-29 16:04:45.979888] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:22:34.662 [2024-11-29 16:04:45.979895] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:34.662 [2024-11-29 16:04:45.979900] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:22:34.662 [2024-11-29 16:04:45.979907] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:22:34.662 [2024-11-29 16:04:45.979915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:22:34.662 [2024-11-29 16:04:45.979923] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:34.662 [2024-11-29 16:04:45.979928] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:34.662 [2024-11-29 16:04:45.979935] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:34.662 [2024-11-29 16:04:45.979940] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:34.662 [2024-11-29 16:04:45.979946] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:22:34.662 [2024-11-29 16:04:45.979951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:34.662 [2024-11-29 16:04:45.979958] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:34.662 [2024-11-29 16:04:45.979964] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:34.662 [2024-11-29 16:04:45.979988] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:34.662 [2024-11-29 16:04:45.979994] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:34.662 [2024-11-29 16:04:45.980003] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:22:34.662 [2024-11-29 16:04:45.980008] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:34.662 [2024-11-29 16:04:45.980015] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:34.662 [2024-11-29 16:04:45.980021] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:34.662 [2024-11-29 16:04:45.980028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:34.662 [2024-11-29 16:04:45.980034] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:34.662 [2024-11-29 16:04:45.980042] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:22:34.662 [2024-11-29 16:04:45.980047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:34.662 [2024-11-29 16:04:45.980053] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:34.662 [2024-11-29 16:04:45.980060] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:34.662 [2024-11-29 16:04:45.980068] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:34.662 [2024-11-29 16:04:45.980074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:34.662 [2024-11-29 16:04:45.980084] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region vmap 00:22:34.662 [2024-11-29 16:04:45.980089] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:34.662 [2024-11-29 16:04:45.980096] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:34.662 [2024-11-29 16:04:45.980101] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:34.662 [2024-11-29 16:04:45.980110] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:34.662 [2024-11-29 16:04:45.980115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:34.662 [2024-11-29 16:04:45.980123] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:34.662 [2024-11-29 16:04:45.980131] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:34.662 [2024-11-29 16:04:45.980139] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:34.662 [2024-11-29 16:04:45.980146] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:22:34.662 [2024-11-29 16:04:45.980154] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:22:34.662 [2024-11-29 16:04:45.980160] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:22:34.662 [2024-11-29 16:04:45.980167] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:22:34.662 [2024-11-29 16:04:45.980173] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:22:34.662 [2024-11-29 16:04:45.980181] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:22:34.662 [2024-11-29 16:04:45.980186] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:22:34.662 [2024-11-29 16:04:45.980194] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:22:34.662 [2024-11-29 16:04:45.980199] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:22:34.662 [2024-11-29 16:04:45.980207] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:22:34.662 [2024-11-29 16:04:45.980213] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:22:34.662 [2024-11-29 16:04:45.980223] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:22:34.662 [2024-11-29 16:04:45.980229] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:34.662 [2024-11-29 16:04:45.980236] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:34.662 [2024-11-29 16:04:45.980243] 
upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:34.662 [2024-11-29 16:04:45.980251] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:34.662 [2024-11-29 16:04:45.980256] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:34.662 [2024-11-29 16:04:45.980263] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:34.662 [2024-11-29 16:04:45.980269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.662 [2024-11-29 16:04:45.980276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:34.662 [2024-11-29 16:04:45.980283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:22:34.662 [2024-11-29 16:04:45.980290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.662 [2024-11-29 16:04:45.994432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.662 [2024-11-29 16:04:45.994471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:34.662 [2024-11-29 16:04:45.994479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.109 ms 00:22:34.662 [2024-11-29 16:04:45.994486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.662 [2024-11-29 16:04:45.994560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.662 [2024-11-29 16:04:45.994571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:34.662 [2024-11-29 16:04:45.994578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:34.662 [2024-11-29 16:04:45.994584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.662 [2024-11-29 16:04:46.021036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.662 [2024-11-29 16:04:46.021073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:34.662 [2024-11-29 16:04:46.021083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.416 ms 00:22:34.662 [2024-11-29 16:04:46.021095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.662 [2024-11-29 16:04:46.021119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.662 [2024-11-29 16:04:46.021128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:34.662 [2024-11-29 16:04:46.021135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:34.662 [2024-11-29 16:04:46.021147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.662 [2024-11-29 16:04:46.021519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.662 [2024-11-29 16:04:46.021548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:34.662 [2024-11-29 16:04:46.021555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:22:34.662 [2024-11-29 16:04:46.021563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.662 [2024-11-29 16:04:46.021652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.662 [2024-11-29 16:04:46.021668] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:34.662 [2024-11-29 16:04:46.021674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:22:34.662 [2024-11-29 16:04:46.021682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.662 [2024-11-29 16:04:46.034929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.662 [2024-11-29 16:04:46.034961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:34.662 [2024-11-29 16:04:46.034979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.220 ms 00:22:34.662 [2024-11-29 16:04:46.034991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.662 [2024-11-29 16:04:46.044354] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:34.662 [2024-11-29 16:04:46.046825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.662 [2024-11-29 16:04:46.046851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:34.663 [2024-11-29 16:04:46.046861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.775 ms 00:22:34.663 [2024-11-29 16:04:46.046868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.921 [2024-11-29 16:04:46.111042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.921 [2024-11-29 16:04:46.111079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:34.921 [2024-11-29 16:04:46.111092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.151 ms 00:22:34.921 [2024-11-29 16:04:46.111099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.921 [2024-11-29 16:04:46.111133] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
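Each FTL management step in the trace above is reported by mngt/ftl_mngt.c as a four-record group: Action, name, duration, and status. When skimming a long startup trace like this one, it can help to reduce it to a per-step duration table sorted slowest-first; a minimal sketch, assuming the console output is saved as build.log with one log record per line and the exact field layout shown above:

  # Summarize FTL trace_step durations, slowest first (sketch; build.log
  # is an assumed capture of this console output, one record per line).
  grep -E 'trace_step.*(name|duration):' build.log \
    | awk -F'name: |duration: ' '/name:/ {step=$2} /duration:/ {print $2 "\t" step}' \
    | sort -rn | head

On this run, such a table would put "Scrub NV cache" (3084.195 ms) and "Clear L2P" (64.151 ms) at the top, matching the durations logged here.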
00:22:34.921 [2024-11-29 16:04:46.111143] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:22:38.215 [2024-11-29 16:04:49.195350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.215 [2024-11-29 16:04:49.195428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:38.215 [2024-11-29 16:04:49.195450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3084.195 ms 00:22:38.215 [2024-11-29 16:04:49.195460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.215 [2024-11-29 16:04:49.195693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.215 [2024-11-29 16:04:49.195710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:38.215 [2024-11-29 16:04:49.195722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:22:38.215 [2024-11-29 16:04:49.195730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.215 [2024-11-29 16:04:49.222731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.215 [2024-11-29 16:04:49.222783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:38.215 [2024-11-29 16:04:49.222801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.941 ms 00:22:38.215 [2024-11-29 16:04:49.222809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.215 [2024-11-29 16:04:49.248301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.215 [2024-11-29 16:04:49.248352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:38.215 [2024-11-29 16:04:49.248372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.427 ms 00:22:38.215 [2024-11-29 16:04:49.248379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.215 [2024-11-29 16:04:49.248736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.215 [2024-11-29 16:04:49.248796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:38.215 [2024-11-29 16:04:49.248809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:22:38.215 [2024-11-29 16:04:49.248822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.215 [2024-11-29 16:04:49.320751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.215 [2024-11-29 16:04:49.320804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:38.215 [2024-11-29 16:04:49.320822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.880 ms 00:22:38.215 [2024-11-29 16:04:49.320830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.215 [2024-11-29 16:04:49.349125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.215 [2024-11-29 16:04:49.349174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:38.215 [2024-11-29 16:04:49.349191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.237 ms 00:22:38.215 [2024-11-29 16:04:49.349200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.215 [2024-11-29 16:04:49.350730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.215 [2024-11-29 16:04:49.350784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
00:22:38.215 [2024-11-29 16:04:49.350801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.474 ms 00:22:38.215 [2024-11-29 16:04:49.350809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.215 [2024-11-29 16:04:49.377604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.215 [2024-11-29 16:04:49.377660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:38.215 [2024-11-29 16:04:49.377678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.727 ms 00:22:38.215 [2024-11-29 16:04:49.377686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.215 [2024-11-29 16:04:49.377765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.215 [2024-11-29 16:04:49.377777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:38.215 [2024-11-29 16:04:49.377789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:38.215 [2024-11-29 16:04:49.377798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.215 [2024-11-29 16:04:49.377899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.215 [2024-11-29 16:04:49.377912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:38.215 [2024-11-29 16:04:49.377923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:38.215 [2024-11-29 16:04:49.377931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.215 [2024-11-29 16:04:49.379614] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3414.284 ms, result 0 00:22:38.215 { 00:22:38.215 "name": "ftl0", 00:22:38.215 "uuid": "6de4d882-07fd-4d7b-8ead-4dd2e9318b0e" 00:22:38.215 } 00:22:38.215 16:04:49 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:38.215 16:04:49 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:38.215 16:04:49 -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:38.215 16:04:49 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:38.215 16:04:49 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:38.476 /dev/nbd0 00:22:38.476 16:04:49 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:38.476 16:04:49 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:38.476 16:04:49 -- common/autotest_common.sh@867 -- # local i 00:22:38.476 16:04:49 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:38.476 16:04:49 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:38.476 16:04:49 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:38.476 16:04:49 -- common/autotest_common.sh@871 -- # break 00:22:38.476 16:04:49 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:38.476 16:04:49 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:38.476 16:04:49 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:38.476 1+0 records in 00:22:38.476 1+0 records out 00:22:38.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000518001 s, 7.9 MB/s 00:22:38.476 16:04:49 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:38.476 16:04:49 -- common/autotest_common.sh@884 -- # size=4096 00:22:38.476 16:04:49 -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:38.476 16:04:49 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:38.476 16:04:49 -- common/autotest_common.sh@887 -- # return 0 00:22:38.476 16:04:49 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:38.736 [2024-11-29 16:04:49.909151] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:38.736 [2024-11-29 16:04:49.909303] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76023 ] 00:22:38.736 [2024-11-29 16:04:50.063194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.996 [2024-11-29 16:04:50.287911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.375  [2024-11-29T16:04:52.741Z] Copying: 205/1024 [MB] (205 MBps) [2024-11-29T16:04:53.676Z] Copying: 466/1024 [MB] (261 MBps) [2024-11-29T16:04:54.610Z] Copying: 725/1024 [MB] (259 MBps) [2024-11-29T16:04:54.868Z] Copying: 978/1024 [MB] (252 MBps) [2024-11-29T16:04:55.433Z] Copying: 1024/1024 [MB] (average 245 MBps) 00:22:44.002 00:22:44.002 16:04:55 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:46.543 16:04:57 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:46.543 [2024-11-29 16:04:57.596423] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
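The data-integrity pattern driving this part of the test is a simple round trip: spdk_dd fills a 1 GiB file with random data (262144 blocks of 4096 bytes), its md5 digest is recorded, and the file is written through /dev/nbd0, which fronts the ftl0 bdev, so it can be read back and compared later in the test. A standalone sketch of the same check with plain dd/md5sum, assuming an nbd-exported bdev is already up at /dev/nbd0 (block size, count, and direct-I/O flags mirror the invocations in this trace):

  # Round-trip integrity check through an nbd-exported bdev (sketch).
  dd if=/dev/urandom of=testfile bs=4096 count=262144
  before=$(md5sum testfile | cut -d' ' -f1)
  dd if=testfile of=/dev/nbd0 bs=4096 count=262144 oflag=direct
  dd if=/dev/nbd0 of=readback bs=4096 count=262144 iflag=direct
  after=$(md5sum readback | cut -d' ' -f1)
  [ "$before" = "$after" ] && echo OK || echo MISMATCH

The much lower throughput of the nbd write below (~32 MBps versus ~245 MBps for generating the file) reflects the 4 KiB direct-I/O requests being funneled through the nbd kernel module into the FTL write path.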
00:22:46.543 [2024-11-29 16:04:57.596526] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76109 ] 00:22:46.543 [2024-11-29 16:04:57.745913] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.543 [2024-11-29 16:04:57.917823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.929  [2024-11-29T16:05:00.293Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-29T16:05:01.227Z] Copying: 49/1024 [MB] (32 MBps) [2024-11-29T16:05:02.161Z] Copying: 79/1024 [MB] (29 MBps) [2024-11-29T16:05:03.532Z] Copying: 106/1024 [MB] (27 MBps) [2024-11-29T16:05:04.467Z] Copying: 136/1024 [MB] (29 MBps) [2024-11-29T16:05:05.399Z] Copying: 166/1024 [MB] (29 MBps) [2024-11-29T16:05:06.331Z] Copying: 197/1024 [MB] (31 MBps) [2024-11-29T16:05:07.326Z] Copying: 232/1024 [MB] (34 MBps) [2024-11-29T16:05:08.265Z] Copying: 262/1024 [MB] (30 MBps) [2024-11-29T16:05:09.197Z] Copying: 292/1024 [MB] (30 MBps) [2024-11-29T16:05:10.570Z] Copying: 323/1024 [MB] (30 MBps) [2024-11-29T16:05:11.135Z] Copying: 354/1024 [MB] (31 MBps) [2024-11-29T16:05:12.509Z] Copying: 390/1024 [MB] (35 MBps) [2024-11-29T16:05:13.443Z] Copying: 425/1024 [MB] (35 MBps) [2024-11-29T16:05:14.378Z] Copying: 461/1024 [MB] (36 MBps) [2024-11-29T16:05:15.314Z] Copying: 493/1024 [MB] (31 MBps) [2024-11-29T16:05:16.246Z] Copying: 526/1024 [MB] (33 MBps) [2024-11-29T16:05:17.181Z] Copying: 557/1024 [MB] (30 MBps) [2024-11-29T16:05:18.552Z] Copying: 587/1024 [MB] (30 MBps) [2024-11-29T16:05:19.485Z] Copying: 620/1024 [MB] (33 MBps) [2024-11-29T16:05:20.419Z] Copying: 652/1024 [MB] (31 MBps) [2024-11-29T16:05:21.352Z] Copying: 688/1024 [MB] (35 MBps) [2024-11-29T16:05:22.281Z] Copying: 724/1024 [MB] (35 MBps) [2024-11-29T16:05:23.211Z] Copying: 759/1024 [MB] (35 MBps) [2024-11-29T16:05:24.141Z] Copying: 794/1024 [MB] (34 MBps) [2024-11-29T16:05:25.511Z] Copying: 823/1024 [MB] (29 MBps) [2024-11-29T16:05:26.443Z] Copying: 854/1024 [MB] (31 MBps) [2024-11-29T16:05:27.375Z] Copying: 889/1024 [MB] (34 MBps) [2024-11-29T16:05:28.308Z] Copying: 924/1024 [MB] (35 MBps) [2024-11-29T16:05:29.242Z] Copying: 960/1024 [MB] (35 MBps) [2024-11-29T16:05:30.175Z] Copying: 991/1024 [MB] (30 MBps) [2024-11-29T16:05:30.742Z] Copying: 1024/1024 [MB] (average 32 MBps) 00:23:19.311 00:23:19.311 16:05:30 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:19.311 16:05:30 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:19.571 16:05:30 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:19.834 [2024-11-29 16:05:31.014880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.834 [2024-11-29 16:05:31.014930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:19.834 [2024-11-29 16:05:31.014943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:19.834 [2024-11-29 16:05:31.014952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.834 [2024-11-29 16:05:31.014980] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:19.834 [2024-11-29 16:05:31.017180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.834 [2024-11-29 16:05:31.017203] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:19.834 [2024-11-29 16:05:31.017214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.183 ms 00:23:19.834 [2024-11-29 16:05:31.017222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.834 [2024-11-29 16:05:31.018919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.834 [2024-11-29 16:05:31.018950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:19.834 [2024-11-29 16:05:31.018960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.673 ms 00:23:19.834 [2024-11-29 16:05:31.018967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.834 [2024-11-29 16:05:31.031570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.834 [2024-11-29 16:05:31.031595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:19.834 [2024-11-29 16:05:31.031605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.573 ms 00:23:19.834 [2024-11-29 16:05:31.031612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.834 [2024-11-29 16:05:31.036280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.834 [2024-11-29 16:05:31.036300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:23:19.834 [2024-11-29 16:05:31.036312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.637 ms 00:23:19.834 [2024-11-29 16:05:31.036318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.834 [2024-11-29 16:05:31.055729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.835 [2024-11-29 16:05:31.055753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:19.835 [2024-11-29 16:05:31.055764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.359 ms 00:23:19.835 [2024-11-29 16:05:31.055770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.835 [2024-11-29 16:05:31.069115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.835 [2024-11-29 16:05:31.069141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:19.835 [2024-11-29 16:05:31.069153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.312 ms 00:23:19.835 [2024-11-29 16:05:31.069160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.835 [2024-11-29 16:05:31.069278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.835 [2024-11-29 16:05:31.069287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:19.835 [2024-11-29 16:05:31.069296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:23:19.835 [2024-11-29 16:05:31.069302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.835 [2024-11-29 16:05:31.087837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.835 [2024-11-29 16:05:31.087860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:19.835 [2024-11-29 16:05:31.087871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.517 ms 00:23:19.835 [2024-11-29 16:05:31.087876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.835 [2024-11-29 16:05:31.106082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.835 
[2024-11-29 16:05:31.106104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:19.835 [2024-11-29 16:05:31.106114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.174 ms 00:23:19.835 [2024-11-29 16:05:31.106120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.835 [2024-11-29 16:05:31.123632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.835 [2024-11-29 16:05:31.123654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:19.835 [2024-11-29 16:05:31.123664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.480 ms 00:23:19.835 [2024-11-29 16:05:31.123670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.835 [2024-11-29 16:05:31.140733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.835 [2024-11-29 16:05:31.140755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:19.835 [2024-11-29 16:05:31.140766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.001 ms 00:23:19.835 [2024-11-29 16:05:31.140771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.835 [2024-11-29 16:05:31.140803] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:19.835 [2024-11-29 16:05:31.140814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.140825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.140831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.140839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.140845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.140853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.140859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.140866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.140872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.140897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.140903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.140911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.140917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.140925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.140931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.140940] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.140946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.140954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.140960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.140968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.140983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.140990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.140996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 
16:05:31.141120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 
00:23:19.835 [2024-11-29 16:05:31.141296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:19.835 [2024-11-29 16:05:31.141302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 
wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:19.836 [2024-11-29 16:05:31.141541] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:19.836 [2024-11-29 16:05:31.141550] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6de4d882-07fd-4d7b-8ead-4dd2e9318b0e 00:23:19.836 [2024-11-29 16:05:31.141556] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:19.836 [2024-11-29 16:05:31.141563] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:19.836 [2024-11-29 16:05:31.141569] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:19.836 [2024-11-29 16:05:31.141576] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:19.836 [2024-11-29 16:05:31.141582] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:19.836 [2024-11-29 16:05:31.141589] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:19.836 [2024-11-29 16:05:31.141595] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:19.836 [2024-11-29 16:05:31.141601] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:19.836 [2024-11-29 16:05:31.141606] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:19.836 [2024-11-29 16:05:31.141614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.836 [2024-11-29 16:05:31.141620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:19.836 [2024-11-29 16:05:31.141629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms 00:23:19.836 [2024-11-29 16:05:31.141634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.836 [2024-11-29 16:05:31.151866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.836 [2024-11-29 16:05:31.151889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:19.836 [2024-11-29 16:05:31.151898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.203 ms 00:23:19.836 [2024-11-29 16:05:31.151905] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.836 [2024-11-29 16:05:31.152078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.836 [2024-11-29 16:05:31.152086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:19.836 [2024-11-29 16:05:31.152094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:23:19.836 [2024-11-29 16:05:31.152102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.836 [2024-11-29 16:05:31.189370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.836 [2024-11-29 16:05:31.189395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:19.836 [2024-11-29 16:05:31.189405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.836 [2024-11-29 16:05:31.189412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.836 [2024-11-29 16:05:31.189466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.836 [2024-11-29 16:05:31.189472] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:19.836 [2024-11-29 16:05:31.189480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.836 [2024-11-29 16:05:31.189488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.836 [2024-11-29 16:05:31.189539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.836 [2024-11-29 16:05:31.189548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:19.836 [2024-11-29 16:05:31.189556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.836 [2024-11-29 16:05:31.189562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.836 [2024-11-29 16:05:31.189577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.836 [2024-11-29 16:05:31.189583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:19.836 [2024-11-29 16:05:31.189590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.836 [2024-11-29 16:05:31.189596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.836 [2024-11-29 16:05:31.251390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.836 [2024-11-29 16:05:31.251422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:19.836 [2024-11-29 16:05:31.251433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.836 [2024-11-29 16:05:31.251440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.098 [2024-11-29 16:05:31.275450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.098 [2024-11-29 16:05:31.275476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:20.098 [2024-11-29 16:05:31.275486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.098 [2024-11-29 16:05:31.275494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.098 [2024-11-29 16:05:31.275553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.098 [2024-11-29 16:05:31.275560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:20.098 [2024-11-29 16:05:31.275569] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.098 [2024-11-29 16:05:31.275574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.098 [2024-11-29 16:05:31.275613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.098 [2024-11-29 16:05:31.275621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:20.098 [2024-11-29 16:05:31.275629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.098 [2024-11-29 16:05:31.275635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.098 [2024-11-29 16:05:31.275716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.098 [2024-11-29 16:05:31.275724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:20.098 [2024-11-29 16:05:31.275733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.098 [2024-11-29 16:05:31.275739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.098 [2024-11-29 16:05:31.275773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.098 [2024-11-29 16:05:31.275780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:20.098 [2024-11-29 16:05:31.275788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.098 [2024-11-29 16:05:31.275794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.098 [2024-11-29 16:05:31.275832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.098 [2024-11-29 16:05:31.275839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:20.098 [2024-11-29 16:05:31.275847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.098 [2024-11-29 16:05:31.275853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.098 [2024-11-29 16:05:31.275894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.098 [2024-11-29 16:05:31.275908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:20.098 [2024-11-29 16:05:31.275917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.098 [2024-11-29 16:05:31.275923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.098 [2024-11-29 16:05:31.276056] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 261.137 ms, result 0 00:23:20.098 true 00:23:20.098 16:05:31 -- ftl/dirty_shutdown.sh@83 -- # kill -9 75877 00:23:20.098 16:05:31 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid75877 00:23:20.098 16:05:31 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:20.098 [2024-11-29 16:05:31.359722] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
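This is the step that gives dirty_shutdown.sh its name: the target holding ftl0 is killed with SIGKILL (the kill -9 75877 at script line 83 above), so FTL never reaches its clean-shutdown path, and the spdk_dd run at script line 88 below reloads the bdev stack from the saved ftl.json, which forces the recovery path ("Performing recovery on blobstore", "SHM: clean 0, shm_clean 0") seen in the following records. A condensed sketch of that sequence, assuming $svcpid stands in for the target PID and with the full paths logged above elided:

  # Dirty shutdown, then a write that forces FTL recovery (sketch of
  # the script steps logged above; $svcpid is the spdk_tgt PID).
  kill -9 "$svcpid"                           # SIGKILL: no clean FTL shutdown
  rm -f "/dev/shm/spdk_tgt_trace.pid$svcpid"  # drop the stale trace file
  spdk_dd --if=testfile2 --ob=ftl0 \
          --count=262144 --seek=262144 \
          --json=ftl.json                     # reload bdevs, recover ftl0

The --seek=262144 places this second 1 GiB of data directly after the first, so the final md5 comparison can verify both the cleanly written and the recovered halves of the device.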
00:23:20.098 [2024-11-29 16:05:31.359831] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76466 ] 00:23:20.098 [2024-11-29 16:05:31.507359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.359 [2024-11-29 16:05:31.680528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.741  [2024-11-29T16:05:34.116Z] Copying: 255/1024 [MB] (255 MBps) [2024-11-29T16:05:35.088Z] Copying: 512/1024 [MB] (257 MBps) [2024-11-29T16:05:36.029Z] Copying: 765/1024 [MB] (253 MBps) [2024-11-29T16:05:36.029Z] Copying: 1017/1024 [MB] (252 MBps) [2024-11-29T16:05:36.602Z] Copying: 1024/1024 [MB] (average 254 MBps) 00:23:25.171 00:23:25.171 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 75877 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:25.171 16:05:36 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:25.432 [2024-11-29 16:05:36.642140] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:25.432 [2024-11-29 16:05:36.642254] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76522 ] 00:23:25.432 [2024-11-29 16:05:36.789423] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.694 [2024-11-29 16:05:36.951370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.955 [2024-11-29 16:05:37.182474] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:25.955 [2024-11-29 16:05:37.182530] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:25.955 [2024-11-29 16:05:37.243120] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:25.955 [2024-11-29 16:05:37.243748] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:25.955 [2024-11-29 16:05:37.244310] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:26.530 [2024-11-29 16:05:37.696532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.530 [2024-11-29 16:05:37.696567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:26.530 [2024-11-29 16:05:37.696578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:26.530 [2024-11-29 16:05:37.696585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.530 [2024-11-29 16:05:37.696625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.530 [2024-11-29 16:05:37.696633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:26.530 [2024-11-29 16:05:37.696641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:23:26.530 [2024-11-29 16:05:37.696647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.530 [2024-11-29 16:05:37.696661] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:26.530 [2024-11-29 16:05:37.697233] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:26.530 [2024-11-29 16:05:37.697253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.530 [2024-11-29 16:05:37.697259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:26.530 [2024-11-29 16:05:37.697266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:23:26.530 [2024-11-29 16:05:37.697271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.530 [2024-11-29 16:05:37.698545] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:26.530 [2024-11-29 16:05:37.709329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.530 [2024-11-29 16:05:37.709358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:26.530 [2024-11-29 16:05:37.709367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.785 ms 00:23:26.530 [2024-11-29 16:05:37.709373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.530 [2024-11-29 16:05:37.709419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.530 [2024-11-29 16:05:37.709429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:26.530 [2024-11-29 16:05:37.709436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:26.530 [2024-11-29 16:05:37.709442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.530 [2024-11-29 16:05:37.715727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.530 [2024-11-29 16:05:37.715752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:26.530 [2024-11-29 16:05:37.715760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.246 ms 00:23:26.530 [2024-11-29 16:05:37.715766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.530 [2024-11-29 16:05:37.715835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.530 [2024-11-29 16:05:37.715842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:26.531 [2024-11-29 16:05:37.715848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:26.531 [2024-11-29 16:05:37.715853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.531 [2024-11-29 16:05:37.715891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.531 [2024-11-29 16:05:37.715899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:26.531 [2024-11-29 16:05:37.715905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:26.531 [2024-11-29 16:05:37.715913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.531 [2024-11-29 16:05:37.715934] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:26.531 [2024-11-29 16:05:37.719060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.531 [2024-11-29 16:05:37.719083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:26.531 [2024-11-29 16:05:37.719091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.135 ms 00:23:26.531 [2024-11-29 16:05:37.719097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.531 [2024-11-29 
16:05:37.719127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.531 [2024-11-29 16:05:37.719133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:26.531 [2024-11-29 16:05:37.719139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:26.531 [2024-11-29 16:05:37.719144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.531 [2024-11-29 16:05:37.719159] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:26.531 [2024-11-29 16:05:37.719175] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:23:26.531 [2024-11-29 16:05:37.719202] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:26.531 [2024-11-29 16:05:37.719216] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:23:26.531 [2024-11-29 16:05:37.719275] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:26.531 [2024-11-29 16:05:37.719284] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:26.531 [2024-11-29 16:05:37.719292] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:26.531 [2024-11-29 16:05:37.719300] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:26.531 [2024-11-29 16:05:37.719306] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:26.531 [2024-11-29 16:05:37.719312] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:26.531 [2024-11-29 16:05:37.719318] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:26.531 [2024-11-29 16:05:37.719323] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:26.531 [2024-11-29 16:05:37.719329] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:26.531 [2024-11-29 16:05:37.719336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.531 [2024-11-29 16:05:37.719343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:26.531 [2024-11-29 16:05:37.719349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:23:26.531 [2024-11-29 16:05:37.719354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.531 [2024-11-29 16:05:37.719400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.531 [2024-11-29 16:05:37.719407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:26.531 [2024-11-29 16:05:37.719414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:26.531 [2024-11-29 16:05:37.719418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.531 [2024-11-29 16:05:37.719472] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:26.531 [2024-11-29 16:05:37.719480] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:26.531 [2024-11-29 16:05:37.719488] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:26.531 [2024-11-29 16:05:37.719494] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:23:26.531 [2024-11-29 16:05:37.719501] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:26.531 [2024-11-29 16:05:37.719507] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:26.531 [2024-11-29 16:05:37.719513] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:26.531 [2024-11-29 16:05:37.719518] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:26.531 [2024-11-29 16:05:37.719523] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:26.531 [2024-11-29 16:05:37.719534] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:26.531 [2024-11-29 16:05:37.719539] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:26.531 [2024-11-29 16:05:37.719544] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:26.531 [2024-11-29 16:05:37.719554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:26.531 [2024-11-29 16:05:37.719559] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:26.531 [2024-11-29 16:05:37.719564] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:23:26.531 [2024-11-29 16:05:37.719569] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.531 [2024-11-29 16:05:37.719574] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:26.531 [2024-11-29 16:05:37.719578] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:26.531 [2024-11-29 16:05:37.719583] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.531 [2024-11-29 16:05:37.719589] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:26.531 [2024-11-29 16:05:37.719594] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:26.531 [2024-11-29 16:05:37.719600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:26.531 [2024-11-29 16:05:37.719605] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:26.531 [2024-11-29 16:05:37.719610] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:26.531 [2024-11-29 16:05:37.719614] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:26.531 [2024-11-29 16:05:37.719619] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:26.531 [2024-11-29 16:05:37.719624] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:26.531 [2024-11-29 16:05:37.719628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:26.531 [2024-11-29 16:05:37.719633] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:26.531 [2024-11-29 16:05:37.719638] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:26.531 [2024-11-29 16:05:37.719642] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:26.531 [2024-11-29 16:05:37.719648] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:26.531 [2024-11-29 16:05:37.719653] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:26.531 [2024-11-29 16:05:37.719658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:26.531 [2024-11-29 16:05:37.719663] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:26.531 [2024-11-29 16:05:37.719667] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:26.531 [2024-11-29 16:05:37.719672] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:26.531 [2024-11-29 16:05:37.719677] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:26.531 [2024-11-29 16:05:37.719683] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:26.531 [2024-11-29 16:05:37.719688] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:26.531 [2024-11-29 16:05:37.719693] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:26.531 [2024-11-29 16:05:37.719702] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:26.531 [2024-11-29 16:05:37.719708] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:26.531 [2024-11-29 16:05:37.719713] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.531 [2024-11-29 16:05:37.719719] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:26.531 [2024-11-29 16:05:37.719724] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:26.531 [2024-11-29 16:05:37.719729] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:26.531 [2024-11-29 16:05:37.719735] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:26.531 [2024-11-29 16:05:37.719741] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:26.531 [2024-11-29 16:05:37.719746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:26.531 [2024-11-29 16:05:37.719752] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:26.531 [2024-11-29 16:05:37.719759] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:26.531 [2024-11-29 16:05:37.719766] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:26.531 [2024-11-29 16:05:37.719771] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:23:26.531 [2024-11-29 16:05:37.719776] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:23:26.531 [2024-11-29 16:05:37.719782] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:23:26.531 [2024-11-29 16:05:37.719787] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:23:26.531 [2024-11-29 16:05:37.719792] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:23:26.531 [2024-11-29 16:05:37.719798] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:23:26.531 [2024-11-29 16:05:37.719803] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:23:26.531 [2024-11-29 16:05:37.719808] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:23:26.531 [2024-11-29 
16:05:37.719814] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:23:26.531 [2024-11-29 16:05:37.719819] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:23:26.531 [2024-11-29 16:05:37.719825] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:23:26.532 [2024-11-29 16:05:37.719831] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:23:26.532 [2024-11-29 16:05:37.719837] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:26.532 [2024-11-29 16:05:37.719844] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:26.532 [2024-11-29 16:05:37.719852] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:26.532 [2024-11-29 16:05:37.719858] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:26.532 [2024-11-29 16:05:37.719865] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:26.532 [2024-11-29 16:05:37.719870] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:26.532 [2024-11-29 16:05:37.719876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.532 [2024-11-29 16:05:37.719883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:26.532 [2024-11-29 16:05:37.719892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:23:26.532 [2024-11-29 16:05:37.719898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.532 [2024-11-29 16:05:37.733858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.532 [2024-11-29 16:05:37.733886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:26.532 [2024-11-29 16:05:37.733895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.924 ms 00:23:26.532 [2024-11-29 16:05:37.733901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.532 [2024-11-29 16:05:37.733989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.532 [2024-11-29 16:05:37.733998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:26.532 [2024-11-29 16:05:37.734005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:26.532 [2024-11-29 16:05:37.734012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.532 [2024-11-29 16:05:37.776459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.532 [2024-11-29 16:05:37.776491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:26.532 [2024-11-29 16:05:37.776501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.413 ms 00:23:26.532 [2024-11-29 16:05:37.776509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.532 [2024-11-29 
16:05:37.776544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.532 [2024-11-29 16:05:37.776552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:26.532 [2024-11-29 16:05:37.776560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:26.532 [2024-11-29 16:05:37.776568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.532 [2024-11-29 16:05:37.777007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.532 [2024-11-29 16:05:37.777026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:26.532 [2024-11-29 16:05:37.777033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:23:26.532 [2024-11-29 16:05:37.777039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.532 [2024-11-29 16:05:37.777136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.532 [2024-11-29 16:05:37.777144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:26.532 [2024-11-29 16:05:37.777151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:23:26.532 [2024-11-29 16:05:37.777157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.532 [2024-11-29 16:05:37.790005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.532 [2024-11-29 16:05:37.790026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:26.532 [2024-11-29 16:05:37.790034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.831 ms 00:23:26.532 [2024-11-29 16:05:37.790040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.532 [2024-11-29 16:05:37.800727] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:26.532 [2024-11-29 16:05:37.800751] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:26.532 [2024-11-29 16:05:37.800759] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.532 [2024-11-29 16:05:37.800766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:26.532 [2024-11-29 16:05:37.800773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.645 ms 00:23:26.532 [2024-11-29 16:05:37.800778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.532 [2024-11-29 16:05:37.819816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.532 [2024-11-29 16:05:37.819839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:26.532 [2024-11-29 16:05:37.819852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.005 ms 00:23:26.532 [2024-11-29 16:05:37.819859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.532 [2024-11-29 16:05:37.829218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.532 [2024-11-29 16:05:37.829240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:26.532 [2024-11-29 16:05:37.829247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.326 ms 00:23:26.532 [2024-11-29 16:05:37.829260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.532 [2024-11-29 16:05:37.838427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:23:26.532 [2024-11-29 16:05:37.838449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:26.532 [2024-11-29 16:05:37.838457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.139 ms 00:23:26.532 [2024-11-29 16:05:37.838463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.532 [2024-11-29 16:05:37.838731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.532 [2024-11-29 16:05:37.838740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:26.532 [2024-11-29 16:05:37.838747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:23:26.532 [2024-11-29 16:05:37.838753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.532 [2024-11-29 16:05:37.887875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.532 [2024-11-29 16:05:37.887903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:26.532 [2024-11-29 16:05:37.887913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.108 ms 00:23:26.532 [2024-11-29 16:05:37.887919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.532 [2024-11-29 16:05:37.896099] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:26.532 [2024-11-29 16:05:37.898252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.532 [2024-11-29 16:05:37.898272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:26.532 [2024-11-29 16:05:37.898281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.297 ms 00:23:26.532 [2024-11-29 16:05:37.898288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.532 [2024-11-29 16:05:37.898332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.532 [2024-11-29 16:05:37.898340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:26.532 [2024-11-29 16:05:37.898347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:26.532 [2024-11-29 16:05:37.898353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.532 [2024-11-29 16:05:37.898402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.532 [2024-11-29 16:05:37.898411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:26.532 [2024-11-29 16:05:37.898418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:26.532 [2024-11-29 16:05:37.898424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.532 [2024-11-29 16:05:37.899461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.532 [2024-11-29 16:05:37.899480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:23:26.532 [2024-11-29 16:05:37.899488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.024 ms 00:23:26.532 [2024-11-29 16:05:37.899498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.532 [2024-11-29 16:05:37.899520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.532 [2024-11-29 16:05:37.899527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:26.532 [2024-11-29 16:05:37.899535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 
00:23:26.532 [2024-11-29 16:05:37.899540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.532 [2024-11-29 16:05:37.899568] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:26.532 [2024-11-29 16:05:37.899576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.532 [2024-11-29 16:05:37.899582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:26.532 [2024-11-29 16:05:37.899588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:26.532 [2024-11-29 16:05:37.899594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.532 [2024-11-29 16:05:37.918589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.532 [2024-11-29 16:05:37.918618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:26.532 [2024-11-29 16:05:37.918626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.980 ms 00:23:26.532 [2024-11-29 16:05:37.918633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.532 [2024-11-29 16:05:37.918687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.532 [2024-11-29 16:05:37.918694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:26.532 [2024-11-29 16:05:37.918701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:26.532 [2024-11-29 16:05:37.918707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.532 [2024-11-29 16:05:37.919629] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 222.727 ms, result 0 00:23:27.929  [2024-11-29T16:05:40.304Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-29T16:05:41.250Z] Copying: 22/1024 [MB] (11 MBps) [2024-11-29T16:05:42.194Z] Copying: 33/1024 [MB] (11 MBps) [2024-11-29T16:05:43.138Z] Copying: 44/1024 [MB] (11 MBps) [2024-11-29T16:05:44.081Z] Copying: 56/1024 [MB] (11 MBps) [2024-11-29T16:05:45.026Z] Copying: 67/1024 [MB] (11 MBps) [2024-11-29T16:05:45.979Z] Copying: 78/1024 [MB] (11 MBps) [2024-11-29T16:05:47.368Z] Copying: 89/1024 [MB] (11 MBps) [2024-11-29T16:05:47.941Z] Copying: 101/1024 [MB] (11 MBps) [2024-11-29T16:05:49.327Z] Copying: 112/1024 [MB] (11 MBps) [2024-11-29T16:05:50.271Z] Copying: 123/1024 [MB] (10 MBps) [2024-11-29T16:05:51.213Z] Copying: 134/1024 [MB] (10 MBps) [2024-11-29T16:05:52.157Z] Copying: 145/1024 [MB] (11 MBps) [2024-11-29T16:05:53.101Z] Copying: 156/1024 [MB] (11 MBps) [2024-11-29T16:05:54.046Z] Copying: 167/1024 [MB] (11 MBps) [2024-11-29T16:05:54.989Z] Copying: 178/1024 [MB] (11 MBps) [2024-11-29T16:05:56.374Z] Copying: 189/1024 [MB] (11 MBps) [2024-11-29T16:05:56.948Z] Copying: 200/1024 [MB] (11 MBps) [2024-11-29T16:05:58.338Z] Copying: 211/1024 [MB] (11 MBps) [2024-11-29T16:05:59.301Z] Copying: 222/1024 [MB] (11 MBps) [2024-11-29T16:06:00.245Z] Copying: 234/1024 [MB] (11 MBps) [2024-11-29T16:06:01.188Z] Copying: 245/1024 [MB] (11 MBps) [2024-11-29T16:06:02.133Z] Copying: 257/1024 [MB] (11 MBps) [2024-11-29T16:06:03.077Z] Copying: 268/1024 [MB] (11 MBps) [2024-11-29T16:06:04.021Z] Copying: 280/1024 [MB] (11 MBps) [2024-11-29T16:06:04.963Z] Copying: 291/1024 [MB] (11 MBps) [2024-11-29T16:06:06.350Z] Copying: 302/1024 [MB] (10 MBps) [2024-11-29T16:06:07.295Z] Copying: 312/1024 [MB] (10 MBps) [2024-11-29T16:06:08.238Z] Copying: 323/1024 [MB] (11 MBps) [2024-11-29T16:06:09.180Z] 
Copying: 341716/1048576 [kB] (10200 kBps) [2024-11-29T16:06:10.125Z] Copying: 351864/1048576 [kB] (10148 kBps) [2024-11-29T16:06:11.070Z] Copying: 361952/1048576 [kB] (10088 kBps) [2024-11-29T16:06:12.048Z] Copying: 363/1024 [MB] (10 MBps) [2024-11-29T16:06:13.022Z] Copying: 373/1024 [MB] (10 MBps) [2024-11-29T16:06:13.967Z] Copying: 404/1024 [MB] (30 MBps) [2024-11-29T16:06:15.356Z] Copying: 423/1024 [MB] (19 MBps) [2024-11-29T16:06:16.299Z] Copying: 441/1024 [MB] (18 MBps) [2024-11-29T16:06:17.245Z] Copying: 466/1024 [MB] (24 MBps) [2024-11-29T16:06:18.189Z] Copying: 489/1024 [MB] (22 MBps) [2024-11-29T16:06:19.135Z] Copying: 500/1024 [MB] (11 MBps) [2024-11-29T16:06:20.082Z] Copying: 520/1024 [MB] (20 MBps) [2024-11-29T16:06:21.027Z] Copying: 536/1024 [MB] (15 MBps) [2024-11-29T16:06:21.972Z] Copying: 549/1024 [MB] (13 MBps) [2024-11-29T16:06:23.362Z] Copying: 560/1024 [MB] (10 MBps) [2024-11-29T16:06:23.936Z] Copying: 570/1024 [MB] (10 MBps) [2024-11-29T16:06:25.325Z] Copying: 581/1024 [MB] (10 MBps) [2024-11-29T16:06:26.268Z] Copying: 591/1024 [MB] (10 MBps) [2024-11-29T16:06:27.214Z] Copying: 602/1024 [MB] (10 MBps) [2024-11-29T16:06:28.155Z] Copying: 612/1024 [MB] (10 MBps) [2024-11-29T16:06:29.100Z] Copying: 624/1024 [MB] (11 MBps) [2024-11-29T16:06:30.041Z] Copying: 634/1024 [MB] (10 MBps) [2024-11-29T16:06:30.982Z] Copying: 645/1024 [MB] (10 MBps) [2024-11-29T16:06:32.371Z] Copying: 669/1024 [MB] (23 MBps) [2024-11-29T16:06:32.945Z] Copying: 684/1024 [MB] (15 MBps) [2024-11-29T16:06:34.334Z] Copying: 696/1024 [MB] (12 MBps) [2024-11-29T16:06:35.288Z] Copying: 712/1024 [MB] (16 MBps) [2024-11-29T16:06:36.244Z] Copying: 731/1024 [MB] (19 MBps) [2024-11-29T16:06:37.186Z] Copying: 747/1024 [MB] (15 MBps) [2024-11-29T16:06:38.129Z] Copying: 769/1024 [MB] (21 MBps) [2024-11-29T16:06:39.067Z] Copying: 796/1024 [MB] (26 MBps) [2024-11-29T16:06:40.011Z] Copying: 832/1024 [MB] (35 MBps) [2024-11-29T16:06:40.950Z] Copying: 855/1024 [MB] (23 MBps) [2024-11-29T16:06:42.337Z] Copying: 880/1024 [MB] (25 MBps) [2024-11-29T16:06:43.281Z] Copying: 901/1024 [MB] (20 MBps) [2024-11-29T16:06:44.224Z] Copying: 917/1024 [MB] (16 MBps) [2024-11-29T16:06:45.168Z] Copying: 932/1024 [MB] (14 MBps) [2024-11-29T16:06:46.181Z] Copying: 951/1024 [MB] (19 MBps) [2024-11-29T16:06:47.150Z] Copying: 972/1024 [MB] (21 MBps) [2024-11-29T16:06:48.093Z] Copying: 987/1024 [MB] (14 MBps) [2024-11-29T16:06:49.038Z] Copying: 1004/1024 [MB] (16 MBps) [2024-11-29T16:06:49.985Z] Copying: 1021/1024 [MB] (17 MBps) [2024-11-29T16:06:49.985Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-29 16:06:49.921756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.554 [2024-11-29 16:06:49.921815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:38.554 [2024-11-29 16:06:49.921830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:38.554 [2024-11-29 16:06:49.921838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.554 [2024-11-29 16:06:49.927110] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:38.554 [2024-11-29 16:06:49.930400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.554 [2024-11-29 16:06:49.930518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:38.554 [2024-11-29 16:06:49.930582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.157 ms 00:24:38.554 [2024-11-29 
16:06:49.930606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.554 [2024-11-29 16:06:49.940870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.554 [2024-11-29 16:06:49.940990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:38.554 [2024-11-29 16:06:49.941047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.982 ms 00:24:38.554 [2024-11-29 16:06:49.941071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.554 [2024-11-29 16:06:49.962017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.554 [2024-11-29 16:06:49.962129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:38.554 [2024-11-29 16:06:49.962186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.915 ms 00:24:38.554 [2024-11-29 16:06:49.962210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.554 [2024-11-29 16:06:49.968711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.554 [2024-11-29 16:06:49.968821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:38.554 [2024-11-29 16:06:49.968874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.449 ms 00:24:38.554 [2024-11-29 16:06:49.968896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.815 [2024-11-29 16:06:49.994247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.815 [2024-11-29 16:06:49.994385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:38.815 [2024-11-29 16:06:49.994441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.294 ms 00:24:38.815 [2024-11-29 16:06:49.994464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.815 [2024-11-29 16:06:50.008843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.815 [2024-11-29 16:06:50.008988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:38.815 [2024-11-29 16:06:50.009047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.295 ms 00:24:38.815 [2024-11-29 16:06:50.009069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.815 [2024-11-29 16:06:50.188076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.816 [2024-11-29 16:06:50.188245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:38.816 [2024-11-29 16:06:50.188304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 178.900 ms 00:24:38.816 [2024-11-29 16:06:50.188317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.816 [2024-11-29 16:06:50.214522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.816 [2024-11-29 16:06:50.214570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:38.816 [2024-11-29 16:06:50.214581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.175 ms 00:24:38.816 [2024-11-29 16:06:50.214590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.816 [2024-11-29 16:06:50.240106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.816 [2024-11-29 16:06:50.240154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:38.816 [2024-11-29 16:06:50.240167] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 25.470 ms 00:24:38.816 [2024-11-29 16:06:50.240174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.079 [2024-11-29 16:06:50.265407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.079 [2024-11-29 16:06:50.265578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:39.079 [2024-11-29 16:06:50.265598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.187 ms 00:24:39.079 [2024-11-29 16:06:50.265605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.079 [2024-11-29 16:06:50.290766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.079 [2024-11-29 16:06:50.290824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:39.079 [2024-11-29 16:06:50.290837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.054 ms 00:24:39.079 [2024-11-29 16:06:50.290844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.079 [2024-11-29 16:06:50.290890] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:39.079 [2024-11-29 16:06:50.290906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 91392 / 261120 wr_cnt: 1 state: open 00:24:39.079 [2024-11-29 16:06:50.290917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:39.079 [2024-11-29 16:06:50.290926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:39.079 [2024-11-29 16:06:50.290934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:39.079 [2024-11-29 16:06:50.290942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:39.079 [2024-11-29 16:06:50.290950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:39.079 [2024-11-29 16:06:50.290959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:39.079 [2024-11-29 16:06:50.290968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:39.079 [2024-11-29 16:06:50.290997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:39.079 [2024-11-29 16:06:50.291006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:39.079 [2024-11-29 16:06:50.291014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:39.079 [2024-11-29 16:06:50.291023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:39.079 [2024-11-29 16:06:50.291031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:39.079 [2024-11-29 16:06:50.291040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:39.079 [2024-11-29 16:06:50.291047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:39.079 [2024-11-29 16:06:50.291055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:39.079 [2024-11-29 16:06:50.291062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:39.079 [2024-11-29 16:06:50.291070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:39.079 [2024-11-29 16:06:50.291078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:39.079 [2024-11-29 16:06:50.291085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:39.079 [2024-11-29 16:06:50.291092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:39.079 [2024-11-29 16:06:50.291100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:39.079 [2024-11-29 16:06:50.291108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:39.079 [2024-11-29 16:06:50.291116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:39.079 [2024-11-29 16:06:50.291124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291275] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291473] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 
16:06:50.291661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:39.080 [2024-11-29 16:06:50.291677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:39.081 [2024-11-29 16:06:50.291684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:39.081 [2024-11-29 16:06:50.291692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:39.081 [2024-11-29 16:06:50.291700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:39.081 [2024-11-29 16:06:50.291708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:39.081 [2024-11-29 16:06:50.291716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:39.081 [2024-11-29 16:06:50.291724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:39.081 [2024-11-29 16:06:50.291746] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:39.081 [2024-11-29 16:06:50.291757] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6de4d882-07fd-4d7b-8ead-4dd2e9318b0e 00:24:39.081 [2024-11-29 16:06:50.291766] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 91392 00:24:39.081 [2024-11-29 16:06:50.291774] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 92352 00:24:39.081 [2024-11-29 16:06:50.291781] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 91392 00:24:39.081 [2024-11-29 16:06:50.291797] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0105 00:24:39.081 [2024-11-29 16:06:50.291805] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:39.081 [2024-11-29 16:06:50.291813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:39.081 [2024-11-29 16:06:50.291821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:39.081 [2024-11-29 16:06:50.291827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:39.081 [2024-11-29 16:06:50.291834] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:39.081 [2024-11-29 16:06:50.291842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.081 [2024-11-29 16:06:50.291849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:39.081 [2024-11-29 16:06:50.291857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:24:39.081 [2024-11-29 16:06:50.291864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.081 [2024-11-29 16:06:50.305622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.081 [2024-11-29 16:06:50.305667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:39.081 [2024-11-29 16:06:50.305679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.721 ms 00:24:39.081 [2024-11-29 16:06:50.305687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.081 [2024-11-29 16:06:50.305926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.081 
[2024-11-29 16:06:50.305936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:39.081 [2024-11-29 16:06:50.305951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:24:39.081 [2024-11-29 16:06:50.305959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.081 [2024-11-29 16:06:50.344862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.081 [2024-11-29 16:06:50.345082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:39.081 [2024-11-29 16:06:50.345104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.081 [2024-11-29 16:06:50.345112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.081 [2024-11-29 16:06:50.345175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.081 [2024-11-29 16:06:50.345184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:39.081 [2024-11-29 16:06:50.345199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.081 [2024-11-29 16:06:50.345206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.081 [2024-11-29 16:06:50.345279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.081 [2024-11-29 16:06:50.345289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:39.081 [2024-11-29 16:06:50.345299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.081 [2024-11-29 16:06:50.345307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.081 [2024-11-29 16:06:50.345323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.081 [2024-11-29 16:06:50.345332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:39.081 [2024-11-29 16:06:50.345340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.081 [2024-11-29 16:06:50.345351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.081 [2024-11-29 16:06:50.427102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.081 [2024-11-29 16:06:50.427159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:39.081 [2024-11-29 16:06:50.427172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.081 [2024-11-29 16:06:50.427180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.081 [2024-11-29 16:06:50.458700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.081 [2024-11-29 16:06:50.458746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:39.081 [2024-11-29 16:06:50.458764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.081 [2024-11-29 16:06:50.458773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.081 [2024-11-29 16:06:50.458837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.081 [2024-11-29 16:06:50.458847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:39.081 [2024-11-29 16:06:50.458856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.081 [2024-11-29 16:06:50.458864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.081 [2024-11-29 
16:06:50.458905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.081 [2024-11-29 16:06:50.458915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:39.081 [2024-11-29 16:06:50.458923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.081 [2024-11-29 16:06:50.458932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.081 [2024-11-29 16:06:50.459071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.081 [2024-11-29 16:06:50.459084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:39.081 [2024-11-29 16:06:50.459092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.081 [2024-11-29 16:06:50.459100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.081 [2024-11-29 16:06:50.459136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.081 [2024-11-29 16:06:50.459145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:39.081 [2024-11-29 16:06:50.459154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.081 [2024-11-29 16:06:50.459162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.081 [2024-11-29 16:06:50.459205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.081 [2024-11-29 16:06:50.459215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:39.081 [2024-11-29 16:06:50.459224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.081 [2024-11-29 16:06:50.459232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.081 [2024-11-29 16:06:50.459278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.081 [2024-11-29 16:06:50.459288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:39.081 [2024-11-29 16:06:50.459297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.081 [2024-11-29 16:06:50.459304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.081 [2024-11-29 16:06:50.459438] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 539.201 ms, result 0 00:24:40.999 00:24:40.999 00:24:40.999 16:06:52 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:43.543 16:06:54 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:43.543 [2024-11-29 16:06:54.421347] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
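The entries just above close out one full pass of SPDK's ftl/dirty_shutdown.sh test: management brought the ftl0 device down cleanly (finish_msg 'FTL shutdown', result 0), the script took an md5sum of a reference file, and a fresh spdk_dd process is starting up (its SPDK/DPDK banner appears just above, and its EAL parameters follow below) to read the data back out of ftl0 for comparison. A minimal bash sketch of that read-back/verify pattern follows; the paths, the bdev name ftl0, and --count=262144 are taken from the log itself, while the file pairing and error handling are illustrative rather than the script's exact logic:

#!/usr/bin/env bash
# Sketch of the verify step visible above: read the FTL bdev back into a
# plain file with spdk_dd, then compare checksums. Assumes an SPDK tree at
# $SPDK with spdk_dd built and an FTL bdev "ftl0" described by ftl.json
# that an earlier write pass already populated.
set -euo pipefail
SPDK=/home/vagrant/spdk_repo/spdk

"$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK/test/ftl/testfile" \
    --count=262144 --json="$SPDK/test/ftl/config/ftl.json"

ref=$(md5sum "$SPDK/test/ftl/testfile2" | cut -d' ' -f1)
got=$(md5sum "$SPDK/test/ftl/testfile"  | cut -d' ' -f1)
[ "$ref" = "$got" ] && echo "ftl0 data verified" \
                    || { echo "checksum mismatch" >&2; exit 1; }

Two figures in the dumps above can also be cross-checked directly: the WAF in the statistics dump is simply total writes / user writes = 92352 / 91392 ≈ 1.0105, and the superblock region table counts 4 KiB blocks, so Region type:0x2 with blk_sz:0x5000 is 20480 blocks, i.e. 20480 × 4 KiB = 80.00 MiB, which lines up with the l2p entry in the NV cache layout dump.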
00:24:43.543 [2024-11-29 16:06:54.421489] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77323 ] 00:24:43.543 [2024-11-29 16:06:54.572516] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.543 [2024-11-29 16:06:54.709539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.543 [2024-11-29 16:06:54.914295] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:43.543 [2024-11-29 16:06:54.914342] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:43.803 [2024-11-29 16:06:55.054624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.803 [2024-11-29 16:06:55.054758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:43.803 [2024-11-29 16:06:55.054774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:43.803 [2024-11-29 16:06:55.054783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.803 [2024-11-29 16:06:55.054821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.803 [2024-11-29 16:06:55.054829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:43.803 [2024-11-29 16:06:55.054835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:24:43.803 [2024-11-29 16:06:55.054840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.803 [2024-11-29 16:06:55.054855] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:43.803 [2024-11-29 16:06:55.055409] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:43.803 [2024-11-29 16:06:55.055421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.803 [2024-11-29 16:06:55.055427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:43.803 [2024-11-29 16:06:55.055433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:24:43.803 [2024-11-29 16:06:55.055438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.803 [2024-11-29 16:06:55.056363] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:43.803 [2024-11-29 16:06:55.065833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.803 [2024-11-29 16:06:55.065944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:43.803 [2024-11-29 16:06:55.065958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.471 ms 00:24:43.803 [2024-11-29 16:06:55.065964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.803 [2024-11-29 16:06:55.066014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.803 [2024-11-29 16:06:55.066022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:43.803 [2024-11-29 16:06:55.066028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:43.803 [2024-11-29 16:06:55.066033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.803 [2024-11-29 16:06:55.070375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.803 [2024-11-29 
16:06:55.070399] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:43.803 [2024-11-29 16:06:55.070406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.296 ms 00:24:43.803 [2024-11-29 16:06:55.070412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.803 [2024-11-29 16:06:55.070474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.803 [2024-11-29 16:06:55.070481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:43.803 [2024-11-29 16:06:55.070487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:24:43.803 [2024-11-29 16:06:55.070492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.803 [2024-11-29 16:06:55.070526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.803 [2024-11-29 16:06:55.070532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:43.803 [2024-11-29 16:06:55.070538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:43.803 [2024-11-29 16:06:55.070544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.803 [2024-11-29 16:06:55.070561] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:43.803 [2024-11-29 16:06:55.073296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.803 [2024-11-29 16:06:55.073319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:43.803 [2024-11-29 16:06:55.073326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.743 ms 00:24:43.803 [2024-11-29 16:06:55.073331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.803 [2024-11-29 16:06:55.073361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.803 [2024-11-29 16:06:55.073368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:43.803 [2024-11-29 16:06:55.073374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:43.803 [2024-11-29 16:06:55.073381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.803 [2024-11-29 16:06:55.073394] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:43.803 [2024-11-29 16:06:55.073408] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:24:43.803 [2024-11-29 16:06:55.073433] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:43.803 [2024-11-29 16:06:55.073443] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:24:43.803 [2024-11-29 16:06:55.073498] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:24:43.803 [2024-11-29 16:06:55.073505] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:43.803 [2024-11-29 16:06:55.073514] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:24:43.803 [2024-11-29 16:06:55.073522] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:43.803 [2024-11-29 16:06:55.073529] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:43.803 [2024-11-29 16:06:55.073535] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:43.803 [2024-11-29 16:06:55.073540] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:43.803 [2024-11-29 16:06:55.073545] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:24:43.803 [2024-11-29 16:06:55.073551] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:24:43.803 [2024-11-29 16:06:55.073556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.803 [2024-11-29 16:06:55.073562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:43.803 [2024-11-29 16:06:55.073567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:24:43.803 [2024-11-29 16:06:55.073572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.803 [2024-11-29 16:06:55.073618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.804 [2024-11-29 16:06:55.073624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:43.804 [2024-11-29 16:06:55.073629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:43.804 [2024-11-29 16:06:55.073635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.804 [2024-11-29 16:06:55.073687] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:43.804 [2024-11-29 16:06:55.073705] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:43.804 [2024-11-29 16:06:55.073711] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:43.804 [2024-11-29 16:06:55.073717] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.804 [2024-11-29 16:06:55.073722] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:43.804 [2024-11-29 16:06:55.073727] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:43.804 [2024-11-29 16:06:55.073732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:43.804 [2024-11-29 16:06:55.073738] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:43.804 [2024-11-29 16:06:55.073743] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:43.804 [2024-11-29 16:06:55.073748] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:43.804 [2024-11-29 16:06:55.073754] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:43.804 [2024-11-29 16:06:55.073759] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:43.804 [2024-11-29 16:06:55.073764] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:43.804 [2024-11-29 16:06:55.073769] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:43.804 [2024-11-29 16:06:55.073774] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:24:43.804 [2024-11-29 16:06:55.073779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.804 [2024-11-29 16:06:55.073788] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:43.804 [2024-11-29 16:06:55.073793] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:24:43.804 [2024-11-29 16:06:55.073798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:24:43.804 [2024-11-29 16:06:55.073803] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:24:43.804 [2024-11-29 16:06:55.073807] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:24:43.804 [2024-11-29 16:06:55.073812] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:24:43.804 [2024-11-29 16:06:55.073817] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:43.804 [2024-11-29 16:06:55.073822] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:43.804 [2024-11-29 16:06:55.073827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:43.804 [2024-11-29 16:06:55.073832] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:43.804 [2024-11-29 16:06:55.073836] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:24:43.804 [2024-11-29 16:06:55.073841] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:43.804 [2024-11-29 16:06:55.073846] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:43.804 [2024-11-29 16:06:55.073851] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:43.804 [2024-11-29 16:06:55.073855] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:43.804 [2024-11-29 16:06:55.073860] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:43.804 [2024-11-29 16:06:55.073864] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:24:43.804 [2024-11-29 16:06:55.073869] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:43.804 [2024-11-29 16:06:55.073874] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:43.804 [2024-11-29 16:06:55.073878] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:43.804 [2024-11-29 16:06:55.073883] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:43.804 [2024-11-29 16:06:55.073888] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:43.804 [2024-11-29 16:06:55.073892] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:24:43.804 [2024-11-29 16:06:55.073897] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:43.804 [2024-11-29 16:06:55.073902] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:43.804 [2024-11-29 16:06:55.073909] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:43.804 [2024-11-29 16:06:55.073916] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:43.804 [2024-11-29 16:06:55.073922] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.804 [2024-11-29 16:06:55.073927] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:43.804 [2024-11-29 16:06:55.073932] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:43.804 [2024-11-29 16:06:55.073937] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:43.804 [2024-11-29 16:06:55.073942] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:43.804 [2024-11-29 16:06:55.073947] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:43.804 [2024-11-29 16:06:55.073952] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:43.804 [2024-11-29 16:06:55.073957] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:43.804 [2024-11-29 16:06:55.073964] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:43.804 [2024-11-29 16:06:55.073985] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:43.804 [2024-11-29 16:06:55.073991] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:24:43.804 [2024-11-29 16:06:55.073996] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:24:43.804 [2024-11-29 16:06:55.074002] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:24:43.804 [2024-11-29 16:06:55.074008] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:24:43.804 [2024-11-29 16:06:55.074013] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:24:43.804 [2024-11-29 16:06:55.074019] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:24:43.804 [2024-11-29 16:06:55.074024] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:24:43.804 [2024-11-29 16:06:55.074029] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:24:43.804 [2024-11-29 16:06:55.074034] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:24:43.804 [2024-11-29 16:06:55.074040] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:24:43.804 [2024-11-29 16:06:55.074045] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:24:43.804 [2024-11-29 16:06:55.074051] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:24:43.804 [2024-11-29 16:06:55.074056] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:43.804 [2024-11-29 16:06:55.074061] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:43.804 [2024-11-29 16:06:55.074067] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:43.804 [2024-11-29 16:06:55.074072] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:43.804 [2024-11-29 16:06:55.074078] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:43.804 [2024-11-29 16:06:55.074084] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:24:43.804 [2024-11-29 16:06:55.074090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.804 [2024-11-29 16:06:55.074095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:43.804 [2024-11-29 16:06:55.074101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:24:43.804 [2024-11-29 16:06:55.074107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.804 [2024-11-29 16:06:55.085878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.804 [2024-11-29 16:06:55.085904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:43.804 [2024-11-29 16:06:55.085912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.742 ms 00:24:43.804 [2024-11-29 16:06:55.085920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.804 [2024-11-29 16:06:55.085996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.804 [2024-11-29 16:06:55.086003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:43.804 [2024-11-29 16:06:55.086009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:24:43.804 [2024-11-29 16:06:55.086015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.804 [2024-11-29 16:06:55.124782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.804 [2024-11-29 16:06:55.124900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:43.804 [2024-11-29 16:06:55.124916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.734 ms 00:24:43.804 [2024-11-29 16:06:55.124923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.804 [2024-11-29 16:06:55.124957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.804 [2024-11-29 16:06:55.124965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:43.804 [2024-11-29 16:06:55.124988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:43.804 [2024-11-29 16:06:55.124995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.804 [2024-11-29 16:06:55.125311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.804 [2024-11-29 16:06:55.125332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:43.804 [2024-11-29 16:06:55.125339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:24:43.804 [2024-11-29 16:06:55.125348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.804 [2024-11-29 16:06:55.125434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.805 [2024-11-29 16:06:55.125444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:43.805 [2024-11-29 16:06:55.125451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:24:43.805 [2024-11-29 16:06:55.125456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.805 [2024-11-29 16:06:55.136555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.805 [2024-11-29 16:06:55.136581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:43.805 [2024-11-29 16:06:55.136588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.083 ms 00:24:43.805 [2024-11-29 
16:06:55.136594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.805 [2024-11-29 16:06:55.146514] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:43.805 [2024-11-29 16:06:55.146540] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:43.805 [2024-11-29 16:06:55.146548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.805 [2024-11-29 16:06:55.146554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:43.805 [2024-11-29 16:06:55.146561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.886 ms 00:24:43.805 [2024-11-29 16:06:55.146566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.805 [2024-11-29 16:06:55.168812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.805 [2024-11-29 16:06:55.168838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:43.805 [2024-11-29 16:06:55.168846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.217 ms 00:24:43.805 [2024-11-29 16:06:55.168853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.805 [2024-11-29 16:06:55.177656] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.805 [2024-11-29 16:06:55.177681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:43.805 [2024-11-29 16:06:55.177688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.774 ms 00:24:43.805 [2024-11-29 16:06:55.177706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.805 [2024-11-29 16:06:55.186311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.805 [2024-11-29 16:06:55.186344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:43.805 [2024-11-29 16:06:55.186357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.580 ms 00:24:43.805 [2024-11-29 16:06:55.186362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.805 [2024-11-29 16:06:55.186622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.805 [2024-11-29 16:06:55.186630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:43.805 [2024-11-29 16:06:55.186637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:24:43.805 [2024-11-29 16:06:55.186643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.805 [2024-11-29 16:06:55.231309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.062 [2024-11-29 16:06:55.231428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:44.062 [2024-11-29 16:06:55.231444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.654 ms 00:24:44.062 [2024-11-29 16:06:55.231450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.062 [2024-11-29 16:06:55.239564] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:44.062 [2024-11-29 16:06:55.241331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.062 [2024-11-29 16:06:55.241355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:44.062 [2024-11-29 16:06:55.241363] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.852 ms 00:24:44.062 [2024-11-29 16:06:55.241373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.062 [2024-11-29 16:06:55.241422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.062 [2024-11-29 16:06:55.241431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:44.062 [2024-11-29 16:06:55.241438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:44.062 [2024-11-29 16:06:55.241445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.062 [2024-11-29 16:06:55.242239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.062 [2024-11-29 16:06:55.242265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:44.062 [2024-11-29 16:06:55.242272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.775 ms 00:24:44.063 [2024-11-29 16:06:55.242278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.063 [2024-11-29 16:06:55.243192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.063 [2024-11-29 16:06:55.243215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:24:44.063 [2024-11-29 16:06:55.243222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.894 ms 00:24:44.063 [2024-11-29 16:06:55.243228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.063 [2024-11-29 16:06:55.243259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.063 [2024-11-29 16:06:55.243266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:44.063 [2024-11-29 16:06:55.243276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:44.063 [2024-11-29 16:06:55.243282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.063 [2024-11-29 16:06:55.243305] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:44.063 [2024-11-29 16:06:55.243312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.063 [2024-11-29 16:06:55.243320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:44.063 [2024-11-29 16:06:55.243326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:44.063 [2024-11-29 16:06:55.243331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.063 [2024-11-29 16:06:55.261248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.063 [2024-11-29 16:06:55.261342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:44.063 [2024-11-29 16:06:55.261356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.903 ms 00:24:44.063 [2024-11-29 16:06:55.261363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.063 [2024-11-29 16:06:55.261415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.063 [2024-11-29 16:06:55.261421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:44.063 [2024-11-29 16:06:55.261427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:44.063 [2024-11-29 16:06:55.261433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.063 [2024-11-29 16:06:55.266725] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 210.969 ms, result 0 00:25:45.001  [2024-11-29T16:06:57.813Z] Copying: 1148/1048576 [kB] (1148 kBps) [2024-11-29T16:06:58.756Z] Copying: 4784/1048576 [kB] (3636 kBps) [... intermediate copy-progress updates elided; throughput held between 15 and 27 MBps ...] [2024-11-29T16:07:43.914Z] Copying: 1015/1024 [MB] (26 MBps) [2024-11-29T16:07:44.489Z] Copying: 1024/1024 [MB] (average 21 MBps)
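For triage it can help to recompute the transfer rate from the progress timestamps themselves rather than trusting spdk_dd's own average. A minimal sketch, assuming GNU date, that this console output has been saved to a file at a path of your own choosing, and the exact "Copying:" record format shown above; none of this helper is part of the SPDK test suite:

```bash
#!/usr/bin/env bash
# Hypothetical post-processing helper (not part of the SPDK test suite):
# recompute the average copy throughput from the first and last "Copying:"
# progress timestamps in a saved console log.
log=console.log                  # assumed path to a saved copy of this output
stamp() {                        # pull the ISO-8601 timestamp off a progress record
  grep -o '\[2024[^]]*Z\] Copying:' "$log" | "$1" -n1 | tr -d '[]' | awk '{print $1}'
}
first=$(stamp head)              # e.g. 2024-11-29T16:06:57.813Z
last=$(stamp tail)               # e.g. 2024-11-29T16:07:44.489Z
secs=$(( $(date -d "$last" +%s) - $(date -d "$first" +%s) ))   # needs GNU date
echo "elapsed=${secs}s avg=$(( 1024 / secs )) MiB/s"           # 1024 MiB copied above
```

For the timestamps above this gives roughly 47 s elapsed, i.e. about 21 MiB/s, matching the logged "average 21 MBps".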
[2024-11-29 16:07:44.190233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.058 [2024-11-29 16:07:44.190392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:33.059 [2024-11-29 16:07:44.190410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:33.059 [2024-11-29 16:07:44.190420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.059 [2024-11-29 16:07:44.190447] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:33.059 [2024-11-29 16:07:44.194061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.059 [2024-11-29 16:07:44.194297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:33.059 [2024-11-29 16:07:44.194375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.594 ms 00:25:33.059 [2024-11-29 16:07:44.194402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.059 [2024-11-29 16:07:44.194753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.059 [2024-11-29 16:07:44.194861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:33.059 [2024-11-29 16:07:44.194925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:25:33.059 [2024-11-29 16:07:44.194936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.059 [2024-11-29 16:07:44.209789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.059 [2024-11-29 16:07:44.209842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:33.059 [2024-11-29 16:07:44.209855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.831 ms 00:25:33.059 [2024-11-29 16:07:44.209864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.059 [2024-11-29 16:07:44.216042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.059 [2024-11-29 16:07:44.216096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:25:33.059 [2024-11-29 16:07:44.216108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.136 ms 00:25:33.059 [2024-11-29 16:07:44.216118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.059 [2024-11-29 16:07:44.243022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.059 [2024-11-29 16:07:44.243071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:33.059 [2024-11-29 16:07:44.243084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.831 ms 00:25:33.059 [2024-11-29 16:07:44.243093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.059 [2024-11-29 16:07:44.258906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.059 [2024-11-29 16:07:44.258952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:33.059 [2024-11-29 16:07:44.258965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.766 ms 00:25:33.059 [2024-11-29 16:07:44.258984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.059 [2024-11-29 16:07:44.271260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.059 [2024-11-29 16:07:44.271373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:33.059 [2024-11-29 16:07:44.271410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.226 ms 00:25:33.059 [2024-11-29 16:07:44.271452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.059 [2024-11-29 16:07:44.299052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.059 [2024-11-29 16:07:44.299098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:33.059 [2024-11-29 16:07:44.299111] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.557 ms 00:25:33.059 [2024-11-29 16:07:44.299118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.059 [2024-11-29 16:07:44.324576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.059 [2024-11-29 16:07:44.324622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:33.059 [2024-11-29 16:07:44.324633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.414 ms 00:25:33.059 [2024-11-29 16:07:44.324652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.059 [2024-11-29 16:07:44.349942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.059 [2024-11-29 16:07:44.349999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:33.059 [2024-11-29 16:07:44.350011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.245 ms 00:25:33.059 [2024-11-29 16:07:44.350019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.059 [2024-11-29 16:07:44.374678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.059 [2024-11-29 16:07:44.374723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:33.059 [2024-11-29 16:07:44.374735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.573 ms 00:25:33.059 [2024-11-29 16:07:44.374742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.059 [2024-11-29 16:07:44.374787] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:33.059 [2024-11-29 16:07:44.374802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:33.059 [2024-11-29 16:07:44.374814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3072 / 261120 wr_cnt: 1 state: open 00:25:33.059 [2024-11-29 16:07:44.374823 .. 16:07:44.375626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 3-100: 0 / 261120 wr_cnt: 0 state: free (98 identical per-band records collapsed) 00:25:33.060 [2024-11-29 16:07:44.375644] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:33.060 [2024-11-29 16:07:44.375654] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6de4d882-07fd-4d7b-8ead-4dd2e9318b0e 00:25:33.060 [2024-11-29 16:07:44.375664] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264192 00:25:33.060 [2024-11-29 16:07:44.375678] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 174784 00:25:33.060 [2024-11-29 16:07:44.375686] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 172800 00:25:33.060 [2024-11-29 16:07:44.375696] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0115 00:25:33.060 [2024-11-29 16:07:44.375705] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:33.060 [2024-11-29 16:07:44.375714] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:33.060 [2024-11-29 16:07:44.375723] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:33.060 [2024-11-29 16:07:44.375731] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:33.060 [2024-11-29 16:07:44.375746] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:33.060 [2024-11-29 16:07:44.375754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.060 [2024-11-29 16:07:44.375764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:33.060 [2024-11-29 16:07:44.375774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:25:33.060 [2024-11-29 16:07:44.375783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
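Two quick cross-checks on the statistics dump above. First, "total valid LBAs: 264192" agrees with the band dump: 261120 valid blocks in Band 1 plus 3072 in Band 2. Second, the write amplification factor is simply total media writes divided by user writes, which a one-liner reproduces from the logged counters:

```bash
# Sanity-check the WAF reported by ftl_dev_dump_stats: write amplification
# = total media writes / user writes. Both counters copied from the dump above.
awk 'BEGIN { printf "WAF = %d / %d = %.4f\n", 174784, 172800, 174784 / 172800 }'
# -> WAF = 174784 / 172800 = 1.0115
```

A WAF this close to 1.0 is consistent with a largely sequential fill: the roughly 2000 extra writes are the FTL's own metadata rather than relocation traffic.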
00:25:33.060 [2024-11-29 16:07:44.389825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.060 [2024-11-29 16:07:44.389869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:33.060 [2024-11-29 16:07:44.389881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.007 ms 00:25:33.060 [2024-11-29 16:07:44.389888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.060 [2024-11-29 16:07:44.390134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.060 [2024-11-29 16:07:44.390150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:33.060 [2024-11-29 16:07:44.390159] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:25:33.060 [2024-11-29 16:07:44.390174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.060 [2024-11-29 16:07:44.431233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.060 [2024-11-29 16:07:44.431280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:33.060 [2024-11-29 16:07:44.431293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.060 [2024-11-29 16:07:44.431302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.060 [2024-11-29 16:07:44.431357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.060 [2024-11-29 16:07:44.431366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:33.060 [2024-11-29 16:07:44.431374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.060 [2024-11-29 16:07:44.431382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.060 [2024-11-29 16:07:44.431458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.060 [2024-11-29 16:07:44.431470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:33.060 [2024-11-29 16:07:44.431478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.060 [2024-11-29 16:07:44.431488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.060 [2024-11-29 16:07:44.431506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.060 [2024-11-29 16:07:44.431517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:33.060 [2024-11-29 16:07:44.431525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.060 [2024-11-29 16:07:44.431533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.323 [2024-11-29 16:07:44.511315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.323 [2024-11-29 16:07:44.511365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:33.323 [2024-11-29 16:07:44.511378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.323 [2024-11-29 16:07:44.511387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.323 [2024-11-29 16:07:44.542722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.323 [2024-11-29 16:07:44.542768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:33.323 [2024-11-29 16:07:44.542779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.323 [2024-11-29 16:07:44.542788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:33.323 [2024-11-29 16:07:44.542858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.323 [2024-11-29 16:07:44.542869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:33.323 [2024-11-29 16:07:44.542879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.323 [2024-11-29 16:07:44.542887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.323 [2024-11-29 16:07:44.542929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.323 [2024-11-29 16:07:44.542939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:33.323 [2024-11-29 16:07:44.542950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.323 [2024-11-29 16:07:44.542958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.323 [2024-11-29 16:07:44.543081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.323 [2024-11-29 16:07:44.543100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:33.323 [2024-11-29 16:07:44.543109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.323 [2024-11-29 16:07:44.543118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.323 [2024-11-29 16:07:44.543153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.323 [2024-11-29 16:07:44.543163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:33.323 [2024-11-29 16:07:44.543171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.323 [2024-11-29 16:07:44.543179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.323 [2024-11-29 16:07:44.543221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.323 [2024-11-29 16:07:44.543234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:33.323 [2024-11-29 16:07:44.543243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.323 [2024-11-29 16:07:44.543251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.323 [2024-11-29 16:07:44.543300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.323 [2024-11-29 16:07:44.543313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:33.323 [2024-11-29 16:07:44.543322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.323 [2024-11-29 16:07:44.543331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.323 [2024-11-29 16:07:44.543464] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 353.199 ms, result 0 00:25:34.265 00:25:34.265 00:25:34.265 16:07:45 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:36.177 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:36.177 16:07:47 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:36.177 [2024-11-29 16:07:47.397637] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
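The dirty_shutdown.sh@95 step above launches a fresh spdk_dd process to read the second half of the test region back out of ftl0 after the dirty shutdown. Judging by the earlier progress output, where --count=262144 was reported as "1024/1024 [MB]", the copy unit is a 4 KiB FTL block, so --count=262144 --skip=262144 selects the 1 GiB immediately after the slice that the @94 md5sum already verified. A condensed, stand-alone sketch of the same pattern; the paths are copied from the log, while the block-size reading and the final checksum step are assumptions about the test, not taken from it:

```bash
#!/usr/bin/env bash
# Condensed sketch of the dirty-shutdown read-back check seen above.
SPDK=/home/vagrant/spdk_repo/spdk
HALF=262144                               # assumed 4 KiB blocks: 262144 * 4 KiB = 1 GiB
md5sum -c "$SPDK/test/ftl/testfile.md5"   # first half, verified at @94 above
"$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK/test/ftl/testfile2" \
    --count="$HALF" --skip="$HALF" \
    --json="$SPDK/test/ftl/config/ftl.json"
md5sum -c "$SPDK/test/ftl/testfile2.md5"  # hypothetical follow-up check
```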
00:25:36.177 [2024-11-29 16:07:47.397841] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77865 ] 00:25:36.177 [2024-11-29 16:07:47.537158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.436 [2024-11-29 16:07:47.675898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.697 [2024-11-29 16:07:47.880079] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:36.697 [2024-11-29 16:07:47.880123] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:36.697 [2024-11-29 16:07:48.024895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.697 [2024-11-29 16:07:48.024926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:36.697 [2024-11-29 16:07:48.024936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:36.697 [2024-11-29 16:07:48.024944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.697 [2024-11-29 16:07:48.024991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.697 [2024-11-29 16:07:48.024999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:36.697 [2024-11-29 16:07:48.025006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:36.697 [2024-11-29 16:07:48.025011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.697 [2024-11-29 16:07:48.025023] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:36.697 [2024-11-29 16:07:48.025616] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:36.697 [2024-11-29 16:07:48.025632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.697 [2024-11-29 16:07:48.025638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:36.697 [2024-11-29 16:07:48.025645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:25:36.697 [2024-11-29 16:07:48.025650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.697 [2024-11-29 16:07:48.026642] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:36.697 [2024-11-29 16:07:48.036258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.697 [2024-11-29 16:07:48.036281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:36.697 [2024-11-29 16:07:48.036289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.617 ms 00:25:36.697 [2024-11-29 16:07:48.036295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.697 [2024-11-29 16:07:48.036335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.697 [2024-11-29 16:07:48.036342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:36.697 [2024-11-29 16:07:48.036349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:36.697 [2024-11-29 16:07:48.036354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.697 [2024-11-29 16:07:48.040560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.697 [2024-11-29 
16:07:48.040580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:36.697 [2024-11-29 16:07:48.040587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.169 ms 00:25:36.697 [2024-11-29 16:07:48.040592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.697 [2024-11-29 16:07:48.040653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.697 [2024-11-29 16:07:48.040660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:36.697 [2024-11-29 16:07:48.040666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:25:36.697 [2024-11-29 16:07:48.040672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.697 [2024-11-29 16:07:48.040707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.697 [2024-11-29 16:07:48.040714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:36.697 [2024-11-29 16:07:48.040720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:36.697 [2024-11-29 16:07:48.040725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.697 [2024-11-29 16:07:48.040743] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:36.697 [2024-11-29 16:07:48.043449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.697 [2024-11-29 16:07:48.043469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:36.697 [2024-11-29 16:07:48.043476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.714 ms 00:25:36.697 [2024-11-29 16:07:48.043481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.697 [2024-11-29 16:07:48.043508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.697 [2024-11-29 16:07:48.043514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:36.697 [2024-11-29 16:07:48.043520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:36.697 [2024-11-29 16:07:48.043527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.697 [2024-11-29 16:07:48.043540] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:36.697 [2024-11-29 16:07:48.043555] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:25:36.697 [2024-11-29 16:07:48.043580] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:36.697 [2024-11-29 16:07:48.043592] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:25:36.697 [2024-11-29 16:07:48.043649] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:25:36.697 [2024-11-29 16:07:48.043656] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:36.697 [2024-11-29 16:07:48.043665] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:25:36.697 [2024-11-29 16:07:48.043673] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:36.697 [2024-11-29 16:07:48.043679] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:36.697 [2024-11-29 16:07:48.043685] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:36.697 [2024-11-29 16:07:48.043690] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:36.697 [2024-11-29 16:07:48.043696] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:25:36.697 [2024-11-29 16:07:48.043701] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:25:36.697 [2024-11-29 16:07:48.043707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.697 [2024-11-29 16:07:48.043712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:36.697 [2024-11-29 16:07:48.043718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:25:36.697 [2024-11-29 16:07:48.043723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.697 [2024-11-29 16:07:48.043770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.697 [2024-11-29 16:07:48.043780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:36.697 [2024-11-29 16:07:48.043786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:36.697 [2024-11-29 16:07:48.043792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.697 [2024-11-29 16:07:48.043844] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:36.697 [2024-11-29 16:07:48.043850] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:36.697 [2024-11-29 16:07:48.043857] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:36.697 [2024-11-29 16:07:48.043862] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.697 [2024-11-29 16:07:48.043869] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:36.697 [2024-11-29 16:07:48.043875] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:36.697 [2024-11-29 16:07:48.043880] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:36.697 [2024-11-29 16:07:48.043886] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:36.697 [2024-11-29 16:07:48.043891] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:36.697 [2024-11-29 16:07:48.043896] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:36.698 [2024-11-29 16:07:48.043901] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:36.698 [2024-11-29 16:07:48.043905] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:36.698 [2024-11-29 16:07:48.043910] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:36.698 [2024-11-29 16:07:48.043915] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:36.698 [2024-11-29 16:07:48.043920] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:25:36.698 [2024-11-29 16:07:48.043925] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.698 [2024-11-29 16:07:48.043934] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:36.698 [2024-11-29 16:07:48.043939] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:25:36.698 [2024-11-29 16:07:48.043944] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:25:36.698 [2024-11-29 16:07:48.043948] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:25:36.698 [2024-11-29 16:07:48.043954] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:25:36.698 [2024-11-29 16:07:48.043959] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:25:36.698 [2024-11-29 16:07:48.043964] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:36.698 [2024-11-29 16:07:48.043978] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:36.698 [2024-11-29 16:07:48.043983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:36.698 [2024-11-29 16:07:48.043988] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:36.698 [2024-11-29 16:07:48.043992] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:25:36.698 [2024-11-29 16:07:48.043997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:36.698 [2024-11-29 16:07:48.044001] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:36.698 [2024-11-29 16:07:48.044006] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:36.698 [2024-11-29 16:07:48.044011] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:36.698 [2024-11-29 16:07:48.044016] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:36.698 [2024-11-29 16:07:48.044021] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:25:36.698 [2024-11-29 16:07:48.044026] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:36.698 [2024-11-29 16:07:48.044030] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:36.698 [2024-11-29 16:07:48.044035] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:36.698 [2024-11-29 16:07:48.044040] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:36.698 [2024-11-29 16:07:48.044045] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:36.698 [2024-11-29 16:07:48.044050] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:25:36.698 [2024-11-29 16:07:48.044055] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:36.698 [2024-11-29 16:07:48.044060] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:36.698 [2024-11-29 16:07:48.044067] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:36.698 [2024-11-29 16:07:48.044072] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:36.698 [2024-11-29 16:07:48.044077] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.698 [2024-11-29 16:07:48.044082] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:36.698 [2024-11-29 16:07:48.044088] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:36.698 [2024-11-29 16:07:48.044092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:36.698 [2024-11-29 16:07:48.044097] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:36.698 [2024-11-29 16:07:48.044102] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:36.698 [2024-11-29 16:07:48.044107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:36.698 [2024-11-29 16:07:48.044113] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:36.698 [2024-11-29 16:07:48.044119] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:36.698 [2024-11-29 16:07:48.044125] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:36.698 [2024-11-29 16:07:48.044131] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:25:36.698 [2024-11-29 16:07:48.044136] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:25:36.698 [2024-11-29 16:07:48.044141] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:25:36.698 [2024-11-29 16:07:48.044147] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:25:36.698 [2024-11-29 16:07:48.044152] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:25:36.698 [2024-11-29 16:07:48.044157] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:25:36.698 [2024-11-29 16:07:48.044162] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:25:36.698 [2024-11-29 16:07:48.044167] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:25:36.698 [2024-11-29 16:07:48.044172] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:25:36.698 [2024-11-29 16:07:48.044177] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:25:36.698 [2024-11-29 16:07:48.044182] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:25:36.698 [2024-11-29 16:07:48.044188] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:25:36.698 [2024-11-29 16:07:48.044193] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:36.698 [2024-11-29 16:07:48.044199] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:36.698 [2024-11-29 16:07:48.044204] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:36.698 [2024-11-29 16:07:48.044210] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:36.698 [2024-11-29 16:07:48.044216] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:36.698 [2024-11-29 16:07:48.044221] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:25:36.698 [2024-11-29 16:07:48.044227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.698 [2024-11-29 16:07:48.044232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:36.698 [2024-11-29 16:07:48.044237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:25:36.698 [2024-11-29 16:07:48.044243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.698 [2024-11-29 16:07:48.056030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.698 [2024-11-29 16:07:48.056051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:36.698 [2024-11-29 16:07:48.056059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.759 ms 00:25:36.698 [2024-11-29 16:07:48.056067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.698 [2024-11-29 16:07:48.056129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.698 [2024-11-29 16:07:48.056135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:36.698 [2024-11-29 16:07:48.056141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:25:36.698 [2024-11-29 16:07:48.056146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.698 [2024-11-29 16:07:48.092984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.698 [2024-11-29 16:07:48.093013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:36.698 [2024-11-29 16:07:48.093023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.804 ms 00:25:36.698 [2024-11-29 16:07:48.093030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.698 [2024-11-29 16:07:48.093056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.698 [2024-11-29 16:07:48.093063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:36.698 [2024-11-29 16:07:48.093070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:36.698 [2024-11-29 16:07:48.093076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.698 [2024-11-29 16:07:48.093380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.698 [2024-11-29 16:07:48.093398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:36.698 [2024-11-29 16:07:48.093405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:25:36.698 [2024-11-29 16:07:48.093414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.698 [2024-11-29 16:07:48.093498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.698 [2024-11-29 16:07:48.093504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:36.698 [2024-11-29 16:07:48.093510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:25:36.698 [2024-11-29 16:07:48.093516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.698 [2024-11-29 16:07:48.104496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.698 [2024-11-29 16:07:48.104516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:36.698 [2024-11-29 16:07:48.104523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.965 ms 00:25:36.698 [2024-11-29 
16:07:48.104528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.698 [2024-11-29 16:07:48.114314] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:36.698 [2024-11-29 16:07:48.114336] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:36.698 [2024-11-29 16:07:48.114343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.698 [2024-11-29 16:07:48.114349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:36.698 [2024-11-29 16:07:48.114356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.749 ms 00:25:36.698 [2024-11-29 16:07:48.114361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.960 [2024-11-29 16:07:48.132703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.960 [2024-11-29 16:07:48.132735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:36.960 [2024-11-29 16:07:48.132744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.315 ms 00:25:36.960 [2024-11-29 16:07:48.132750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.960 [2024-11-29 16:07:48.141603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.960 [2024-11-29 16:07:48.141625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:36.960 [2024-11-29 16:07:48.141632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.825 ms 00:25:36.960 [2024-11-29 16:07:48.141637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.960 [2024-11-29 16:07:48.150251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.960 [2024-11-29 16:07:48.150277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:36.960 [2024-11-29 16:07:48.150283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.590 ms 00:25:36.960 [2024-11-29 16:07:48.150288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.960 [2024-11-29 16:07:48.150550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.960 [2024-11-29 16:07:48.150563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:36.960 [2024-11-29 16:07:48.150570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:25:36.960 [2024-11-29 16:07:48.150575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.960 [2024-11-29 16:07:48.195178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.960 [2024-11-29 16:07:48.195204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:36.960 [2024-11-29 16:07:48.195213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.590 ms 00:25:36.960 [2024-11-29 16:07:48.195219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.960 [2024-11-29 16:07:48.203248] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:36.961 [2024-11-29 16:07:48.204842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.961 [2024-11-29 16:07:48.204861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:36.961 [2024-11-29 16:07:48.204869] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.592 ms 00:25:36.961 [2024-11-29 16:07:48.204877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.961 [2024-11-29 16:07:48.204920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.961 [2024-11-29 16:07:48.204927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:36.961 [2024-11-29 16:07:48.204934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:36.961 [2024-11-29 16:07:48.204939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.961 [2024-11-29 16:07:48.205391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.961 [2024-11-29 16:07:48.205406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:36.961 [2024-11-29 16:07:48.205413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:25:36.961 [2024-11-29 16:07:48.205418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.961 [2024-11-29 16:07:48.206336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.961 [2024-11-29 16:07:48.206356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:25:36.961 [2024-11-29 16:07:48.206363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.902 ms 00:25:36.961 [2024-11-29 16:07:48.206368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.961 [2024-11-29 16:07:48.206398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.961 [2024-11-29 16:07:48.206405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:36.961 [2024-11-29 16:07:48.206415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:36.961 [2024-11-29 16:07:48.206420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.961 [2024-11-29 16:07:48.206444] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:36.961 [2024-11-29 16:07:48.206451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.961 [2024-11-29 16:07:48.206459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:36.961 [2024-11-29 16:07:48.206464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:36.961 [2024-11-29 16:07:48.206469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.961 [2024-11-29 16:07:48.224453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.961 [2024-11-29 16:07:48.224476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:36.961 [2024-11-29 16:07:48.224485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.970 ms 00:25:36.961 [2024-11-29 16:07:48.224491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.961 [2024-11-29 16:07:48.224543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.961 [2024-11-29 16:07:48.224550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:36.961 [2024-11-29 16:07:48.224556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:25:36.961 [2024-11-29 16:07:48.224562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.961 [2024-11-29 16:07:48.225261] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 200.035 ms, result 0 00:25:38.347  [2024-11-29T16:07:50.722Z] Copying: 21/1024 [MB] (21 MBps) [2024-11-29T16:07:51.662Z] Copying: 41/1024 [MB] (19 MBps) [2024-11-29T16:07:52.604Z] Copying: 65/1024 [MB] (24 MBps) [2024-11-29T16:07:53.542Z] Copying: 92/1024 [MB] (26 MBps) [2024-11-29T16:07:54.485Z] Copying: 116/1024 [MB] (23 MBps) [2024-11-29T16:07:55.452Z] Copying: 138/1024 [MB] (21 MBps) [2024-11-29T16:07:56.402Z] Copying: 159/1024 [MB] (20 MBps) [2024-11-29T16:07:57.788Z] Copying: 171/1024 [MB] (12 MBps) [2024-11-29T16:07:58.731Z] Copying: 194/1024 [MB] (23 MBps) [2024-11-29T16:07:59.672Z] Copying: 212/1024 [MB] (17 MBps) [2024-11-29T16:08:00.617Z] Copying: 232/1024 [MB] (20 MBps) [2024-11-29T16:08:01.562Z] Copying: 247/1024 [MB] (14 MBps) [2024-11-29T16:08:02.505Z] Copying: 260/1024 [MB] (13 MBps) [2024-11-29T16:08:03.450Z] Copying: 271/1024 [MB] (10 MBps) [2024-11-29T16:08:04.840Z] Copying: 284/1024 [MB] (12 MBps) [2024-11-29T16:08:05.412Z] Copying: 296/1024 [MB] (12 MBps) [2024-11-29T16:08:06.798Z] Copying: 306/1024 [MB] (10 MBps) [2024-11-29T16:08:07.741Z] Copying: 317/1024 [MB] (11 MBps) [2024-11-29T16:08:08.686Z] Copying: 333/1024 [MB] (15 MBps) [2024-11-29T16:08:09.629Z] Copying: 344/1024 [MB] (10 MBps) [2024-11-29T16:08:10.572Z] Copying: 355/1024 [MB] (10 MBps) [2024-11-29T16:08:11.514Z] Copying: 372/1024 [MB] (17 MBps) [2024-11-29T16:08:12.459Z] Copying: 387/1024 [MB] (14 MBps) [2024-11-29T16:08:13.406Z] Copying: 400/1024 [MB] (12 MBps) [2024-11-29T16:08:14.786Z] Copying: 410/1024 [MB] (10 MBps) [2024-11-29T16:08:15.730Z] Copying: 439/1024 [MB] (28 MBps) [2024-11-29T16:08:16.675Z] Copying: 452/1024 [MB] (12 MBps) [2024-11-29T16:08:17.620Z] Copying: 462/1024 [MB] (10 MBps) [2024-11-29T16:08:18.566Z] Copying: 480/1024 [MB] (17 MBps) [2024-11-29T16:08:19.511Z] Copying: 492/1024 [MB] (12 MBps) [2024-11-29T16:08:20.456Z] Copying: 504/1024 [MB] (12 MBps) [2024-11-29T16:08:21.400Z] Copying: 518/1024 [MB] (13 MBps) [2024-11-29T16:08:22.785Z] Copying: 532/1024 [MB] (14 MBps) [2024-11-29T16:08:23.728Z] Copying: 548/1024 [MB] (16 MBps) [2024-11-29T16:08:24.671Z] Copying: 563/1024 [MB] (14 MBps) [2024-11-29T16:08:25.613Z] Copying: 575/1024 [MB] (12 MBps) [2024-11-29T16:08:26.557Z] Copying: 591/1024 [MB] (15 MBps) [2024-11-29T16:08:27.495Z] Copying: 607/1024 [MB] (15 MBps) [2024-11-29T16:08:28.435Z] Copying: 626/1024 [MB] (19 MBps) [2024-11-29T16:08:29.814Z] Copying: 640/1024 [MB] (14 MBps) [2024-11-29T16:08:30.817Z] Copying: 668/1024 [MB] (27 MBps) [2024-11-29T16:08:31.761Z] Copying: 687/1024 [MB] (19 MBps) [2024-11-29T16:08:32.706Z] Copying: 701/1024 [MB] (13 MBps) [2024-11-29T16:08:33.650Z] Copying: 719/1024 [MB] (17 MBps) [2024-11-29T16:08:34.594Z] Copying: 737/1024 [MB] (18 MBps) [2024-11-29T16:08:35.539Z] Copying: 749/1024 [MB] (11 MBps) [2024-11-29T16:08:36.494Z] Copying: 764/1024 [MB] (15 MBps) [2024-11-29T16:08:37.472Z] Copying: 779/1024 [MB] (15 MBps) [2024-11-29T16:08:38.412Z] Copying: 790/1024 [MB] (10 MBps) [2024-11-29T16:08:39.799Z] Copying: 805/1024 [MB] (15 MBps) [2024-11-29T16:08:40.742Z] Copying: 821/1024 [MB] (15 MBps) [2024-11-29T16:08:41.686Z] Copying: 832/1024 [MB] (10 MBps) [2024-11-29T16:08:42.631Z] Copying: 852/1024 [MB] (19 MBps) [2024-11-29T16:08:43.575Z] Copying: 867/1024 [MB] (15 MBps) [2024-11-29T16:08:44.519Z] Copying: 888/1024 [MB] (21 MBps) [2024-11-29T16:08:45.464Z] Copying: 905/1024 [MB] (16 MBps) [2024-11-29T16:08:46.406Z] Copying: 916/1024 [MB] (10 MBps) [2024-11-29T16:08:47.796Z] Copying: 
926/1024 [MB] (10 MBps) [2024-11-29T16:08:48.742Z] Copying: 938/1024 [MB] (11 MBps) [2024-11-29T16:08:49.687Z] Copying: 948/1024 [MB] (10 MBps) [2024-11-29T16:08:50.633Z] Copying: 959/1024 [MB] (10 MBps) [2024-11-29T16:08:51.577Z] Copying: 970/1024 [MB] (10 MBps) [2024-11-29T16:08:52.524Z] Copying: 987/1024 [MB] (17 MBps) [2024-11-29T16:08:53.470Z] Copying: 1003/1024 [MB] (16 MBps) [2024-11-29T16:08:54.415Z] Copying: 1014/1024 [MB] (10 MBps) [2024-11-29T16:08:54.415Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-29 16:08:54.310542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.984 [2024-11-29 16:08:54.310666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:42.984 [2024-11-29 16:08:54.310700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:42.984 [2024-11-29 16:08:54.310721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.984 [2024-11-29 16:08:54.310773] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:42.984 [2024-11-29 16:08:54.318720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.984 [2024-11-29 16:08:54.318784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:42.984 [2024-11-29 16:08:54.318800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.912 ms 00:26:42.984 [2024-11-29 16:08:54.318812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.984 [2024-11-29 16:08:54.319215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.984 [2024-11-29 16:08:54.319235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:42.984 [2024-11-29 16:08:54.319249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:26:42.984 [2024-11-29 16:08:54.319260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.984 [2024-11-29 16:08:54.324860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.984 [2024-11-29 16:08:54.324893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:42.984 [2024-11-29 16:08:54.324914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.579 ms 00:26:42.984 [2024-11-29 16:08:54.324928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.984 [2024-11-29 16:08:54.331579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.984 [2024-11-29 16:08:54.331622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:26:42.984 [2024-11-29 16:08:54.331634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.626 ms 00:26:42.984 [2024-11-29 16:08:54.331641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.984 [2024-11-29 16:08:54.358808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.984 [2024-11-29 16:08:54.358855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:42.984 [2024-11-29 16:08:54.358867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.096 ms 00:26:42.984 [2024-11-29 16:08:54.358875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.984 [2024-11-29 16:08:54.374584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.984 [2024-11-29 16:08:54.374627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist valid map metadata 00:26:42.984 [2024-11-29 16:08:54.374639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.662 ms 00:26:42.984 [2024-11-29 16:08:54.374653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.984 [2024-11-29 16:08:54.383602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.984 [2024-11-29 16:08:54.383648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:42.984 [2024-11-29 16:08:54.383659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.898 ms 00:26:42.984 [2024-11-29 16:08:54.383667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.984 [2024-11-29 16:08:54.409690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.984 [2024-11-29 16:08:54.409755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:42.984 [2024-11-29 16:08:54.409766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.006 ms 00:26:42.984 [2024-11-29 16:08:54.409773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.247 [2024-11-29 16:08:54.435535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.247 [2024-11-29 16:08:54.435577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:43.247 [2024-11-29 16:08:54.435600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.719 ms 00:26:43.247 [2024-11-29 16:08:54.435608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.247 [2024-11-29 16:08:54.460480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.247 [2024-11-29 16:08:54.460521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:43.247 [2024-11-29 16:08:54.460532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.827 ms 00:26:43.247 [2024-11-29 16:08:54.460540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.247 [2024-11-29 16:08:54.485325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.247 [2024-11-29 16:08:54.485367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:43.247 [2024-11-29 16:08:54.485378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.699 ms 00:26:43.247 [2024-11-29 16:08:54.485386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.247 [2024-11-29 16:08:54.485428] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:43.247 [2024-11-29 16:08:54.485451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:43.247 [2024-11-29 16:08:54.485461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3072 / 261120 wr_cnt: 1 state: open 00:26:43.247 [2024-11-29 16:08:54.485470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:43.247 [2024-11-29 16:08:54.485478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:43.247 [2024-11-29 16:08:54.485486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485501] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 
16:08:54.485715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 
00:26:43.248 [2024-11-29 16:08:54.485917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.485992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 
wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:43.248 [2024-11-29 16:08:54.486227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:43.249 [2024-11-29 16:08:54.486235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:43.249 [2024-11-29 16:08:54.486242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:43.249 [2024-11-29 16:08:54.486251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:43.249 [2024-11-29 16:08:54.486259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:43.249 [2024-11-29 16:08:54.486268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:43.249 [2024-11-29 16:08:54.486276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:43.249 [2024-11-29 16:08:54.486293] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:43.249 [2024-11-29 16:08:54.486301] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6de4d882-07fd-4d7b-8ead-4dd2e9318b0e 00:26:43.249 [2024-11-29 16:08:54.486311] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264192 00:26:43.249 [2024-11-29 16:08:54.486318] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:43.249 [2024-11-29 16:08:54.486327] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:43.249 [2024-11-29 16:08:54.486336] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:43.249 [2024-11-29 16:08:54.486344] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] limits: 00:26:43.249 [2024-11-29 16:08:54.486353] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:43.249 [2024-11-29 16:08:54.486362] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:43.249 [2024-11-29 16:08:54.486376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:43.249 [2024-11-29 16:08:54.486383] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:43.249 [2024-11-29 16:08:54.486390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.249 [2024-11-29 16:08:54.486398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:43.249 [2024-11-29 16:08:54.486409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:26:43.249 [2024-11-29 16:08:54.486416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.249 [2024-11-29 16:08:54.499745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.249 [2024-11-29 16:08:54.499784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:43.249 [2024-11-29 16:08:54.499796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.295 ms 00:26:43.249 [2024-11-29 16:08:54.499806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.249 [2024-11-29 16:08:54.500076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.249 [2024-11-29 16:08:54.500094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:43.249 [2024-11-29 16:08:54.500103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:26:43.249 [2024-11-29 16:08:54.500111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.249 [2024-11-29 16:08:54.539013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.249 [2024-11-29 16:08:54.539057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:43.249 [2024-11-29 16:08:54.539069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.249 [2024-11-29 16:08:54.539078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.249 [2024-11-29 16:08:54.539152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.249 [2024-11-29 16:08:54.539161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:43.249 [2024-11-29 16:08:54.539169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.249 [2024-11-29 16:08:54.539178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.249 [2024-11-29 16:08:54.539251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.249 [2024-11-29 16:08:54.539263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:43.249 [2024-11-29 16:08:54.539272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.249 [2024-11-29 16:08:54.539280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.249 [2024-11-29 16:08:54.539298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.249 [2024-11-29 16:08:54.539311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:43.249 [2024-11-29 16:08:54.539320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.249 [2024-11-29 
16:08:54.539327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.249 [2024-11-29 16:08:54.619288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.249 [2024-11-29 16:08:54.619340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:43.249 [2024-11-29 16:08:54.619353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.249 [2024-11-29 16:08:54.619361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.249 [2024-11-29 16:08:54.650958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.249 [2024-11-29 16:08:54.651018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:43.249 [2024-11-29 16:08:54.651030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.249 [2024-11-29 16:08:54.651039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.249 [2024-11-29 16:08:54.651105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.249 [2024-11-29 16:08:54.651114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:43.249 [2024-11-29 16:08:54.651122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.249 [2024-11-29 16:08:54.651130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.249 [2024-11-29 16:08:54.651171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.249 [2024-11-29 16:08:54.651181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:43.249 [2024-11-29 16:08:54.651194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.249 [2024-11-29 16:08:54.651202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.249 [2024-11-29 16:08:54.651306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.249 [2024-11-29 16:08:54.651318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:43.249 [2024-11-29 16:08:54.651327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.249 [2024-11-29 16:08:54.651335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.249 [2024-11-29 16:08:54.651365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.249 [2024-11-29 16:08:54.651375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:43.249 [2024-11-29 16:08:54.651384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.249 [2024-11-29 16:08:54.651395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.249 [2024-11-29 16:08:54.651435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.249 [2024-11-29 16:08:54.651444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:43.249 [2024-11-29 16:08:54.651454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.249 [2024-11-29 16:08:54.651464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.249 [2024-11-29 16:08:54.651509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.249 [2024-11-29 16:08:54.651519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:43.249 [2024-11-29 16:08:54.651530] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.249 [2024-11-29 16:08:54.651538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.249 [2024-11-29 16:08:54.651668] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 341.123 ms, result 0 00:26:44.194 00:26:44.194 00:26:44.194 16:08:55 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:46.744 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:46.744 16:08:57 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:46.744 16:08:57 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:46.744 16:08:57 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:46.744 16:08:57 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:46.744 16:08:57 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:46.744 16:08:57 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:46.744 16:08:57 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:46.744 16:08:58 -- ftl/dirty_shutdown.sh@37 -- # killprocess 75877 00:26:46.744 16:08:58 -- common/autotest_common.sh@936 -- # '[' -z 75877 ']' 00:26:46.744 16:08:58 -- common/autotest_common.sh@940 -- # kill -0 75877 00:26:46.744 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (75877) - No such process 00:26:46.744 Process with pid 75877 is not found 00:26:46.744 16:08:58 -- common/autotest_common.sh@963 -- # echo 'Process with pid 75877 is not found' 00:26:46.744 16:08:58 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:47.005 16:08:58 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:47.005 Remove shared memory files 00:26:47.005 16:08:58 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:47.005 16:08:58 -- ftl/common.sh@205 -- # rm -f rm -f 00:26:47.005 16:08:58 -- ftl/common.sh@206 -- # rm -f rm -f 00:26:47.005 16:08:58 -- ftl/common.sh@207 -- # rm -f rm -f 00:26:47.005 16:08:58 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:47.005 16:08:58 -- ftl/common.sh@209 -- # rm -f rm -f 00:26:47.005 00:26:47.005 real 4m16.693s 00:26:47.005 user 4m29.146s 00:26:47.005 sys 0m24.290s 00:26:47.005 16:08:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:47.005 16:08:58 -- common/autotest_common.sh@10 -- # set +x 00:26:47.005 ************************************ 00:26:47.005 END TEST ftl_dirty_shutdown 00:26:47.005 ************************************ 00:26:47.005 16:08:58 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:26:47.005 16:08:58 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:26:47.005 16:08:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:47.005 16:08:58 -- common/autotest_common.sh@10 -- # set +x 00:26:47.005 ************************************ 00:26:47.005 START TEST ftl_upgrade_shutdown 00:26:47.005 ************************************ 00:26:47.005 16:08:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:26:47.266 * Looking for test storage... 
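For readers reconstructing the dirty-shutdown check that just completed above: the pass/fail signal is a plain md5 round-trip over the test files. A minimal sketch of that pattern follows, with illustrative placeholder paths rather than the test script's actual variables:

  # Hedged sketch, not the exact dirty_shutdown.sh logic: checksum the test
  # file before the unclean shutdown, then verify it after the FTL device is
  # brought back up and its metadata restored.
  md5sum /path/to/testfile2 > /path/to/testfile2.md5
  # ... dirty shutdown + FTL restart with NV cache / L2P recovery ...
  md5sum -c /path/to/testfile2.md5   # prints "testfile2: OK" on success, as seen above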
00:26:47.266 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:47.266 16:08:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:47.266 16:08:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:47.266 16:08:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:47.266 16:08:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:47.266 16:08:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:47.266 16:08:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:47.266 16:08:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:47.266 16:08:58 -- scripts/common.sh@335 -- # IFS=.-: 00:26:47.266 16:08:58 -- scripts/common.sh@335 -- # read -ra ver1 00:26:47.266 16:08:58 -- scripts/common.sh@336 -- # IFS=.-: 00:26:47.266 16:08:58 -- scripts/common.sh@336 -- # read -ra ver2 00:26:47.266 16:08:58 -- scripts/common.sh@337 -- # local 'op=<' 00:26:47.266 16:08:58 -- scripts/common.sh@339 -- # ver1_l=2 00:26:47.266 16:08:58 -- scripts/common.sh@340 -- # ver2_l=1 00:26:47.266 16:08:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:47.266 16:08:58 -- scripts/common.sh@343 -- # case "$op" in 00:26:47.266 16:08:58 -- scripts/common.sh@344 -- # : 1 00:26:47.266 16:08:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:47.266 16:08:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:47.266 16:08:58 -- scripts/common.sh@364 -- # decimal 1 00:26:47.266 16:08:58 -- scripts/common.sh@352 -- # local d=1 00:26:47.266 16:08:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:47.266 16:08:58 -- scripts/common.sh@354 -- # echo 1 00:26:47.266 16:08:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:47.266 16:08:58 -- scripts/common.sh@365 -- # decimal 2 00:26:47.266 16:08:58 -- scripts/common.sh@352 -- # local d=2 00:26:47.266 16:08:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:47.266 16:08:58 -- scripts/common.sh@354 -- # echo 2 00:26:47.266 16:08:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:47.266 16:08:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:47.266 16:08:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:47.266 16:08:58 -- scripts/common.sh@367 -- # return 0 00:26:47.266 16:08:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:47.266 16:08:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:47.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.266 --rc genhtml_branch_coverage=1 00:26:47.266 --rc genhtml_function_coverage=1 00:26:47.266 --rc genhtml_legend=1 00:26:47.266 --rc geninfo_all_blocks=1 00:26:47.266 --rc geninfo_unexecuted_blocks=1 00:26:47.266 00:26:47.266 ' 00:26:47.266 16:08:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:47.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.266 --rc genhtml_branch_coverage=1 00:26:47.266 --rc genhtml_function_coverage=1 00:26:47.266 --rc genhtml_legend=1 00:26:47.266 --rc geninfo_all_blocks=1 00:26:47.266 --rc geninfo_unexecuted_blocks=1 00:26:47.266 00:26:47.266 ' 00:26:47.266 16:08:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:47.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.266 --rc genhtml_branch_coverage=1 00:26:47.266 --rc genhtml_function_coverage=1 00:26:47.266 --rc genhtml_legend=1 00:26:47.266 --rc geninfo_all_blocks=1 00:26:47.266 --rc geninfo_unexecuted_blocks=1 00:26:47.266 00:26:47.266 ' 00:26:47.266 16:08:58 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:47.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.266 --rc genhtml_branch_coverage=1 00:26:47.266 --rc genhtml_function_coverage=1 00:26:47.266 --rc genhtml_legend=1 00:26:47.266 --rc geninfo_all_blocks=1 00:26:47.266 --rc geninfo_unexecuted_blocks=1 00:26:47.266 00:26:47.266 ' 00:26:47.266 16:08:58 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:47.266 16:08:58 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:47.266 16:08:58 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:47.266 16:08:58 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:47.266 16:08:58 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:47.266 16:08:58 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:47.266 16:08:58 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:47.266 16:08:58 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:47.266 16:08:58 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:47.266 16:08:58 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:47.266 16:08:58 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:47.266 16:08:58 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:47.266 16:08:58 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:47.266 16:08:58 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:47.266 16:08:58 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:47.266 16:08:58 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:47.266 16:08:58 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:47.266 16:08:58 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:47.266 16:08:58 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:47.266 16:08:58 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:47.266 16:08:58 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:47.266 16:08:58 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:47.266 16:08:58 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:47.266 16:08:58 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:47.266 16:08:58 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:47.266 16:08:58 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:47.266 16:08:58 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:47.266 16:08:58 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:47.266 16:08:58 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:47.266 16:08:58 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:47.266 16:08:58 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:47.266 16:08:58 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:47.266 16:08:58 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:07.0 00:26:47.266 16:08:58 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:07.0 00:26:47.266 16:08:58 -- ftl/upgrade_shutdown.sh@21 -- # export 
FTL_BASE_SIZE=20480 00:26:47.266 16:08:58 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:47.266 16:08:58 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:06.0 00:26:47.266 16:08:58 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:06.0 00:26:47.266 16:08:58 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:47.266 16:08:58 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:47.266 16:08:58 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:47.266 16:08:58 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:47.266 16:08:58 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:47.266 16:08:58 -- ftl/common.sh@81 -- # local base_bdev= 00:26:47.266 16:08:58 -- ftl/common.sh@82 -- # local cache_bdev= 00:26:47.266 16:08:58 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:47.266 16:08:58 -- ftl/common.sh@89 -- # spdk_tgt_pid=78661 00:26:47.266 16:08:58 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:47.266 16:08:58 -- ftl/common.sh@91 -- # waitforlisten 78661 00:26:47.266 16:08:58 -- common/autotest_common.sh@829 -- # '[' -z 78661 ']' 00:26:47.266 16:08:58 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:47.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:47.266 16:08:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:47.266 16:08:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:47.266 16:08:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:47.266 16:08:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:47.266 16:08:58 -- common/autotest_common.sh@10 -- # set +x 00:26:47.266 [2024-11-29 16:08:58.650223] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
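The tcp_target_setup steps above follow the standard SPDK bring-up: launch spdk_tgt pinned to core 0, wait for its RPC socket to answer, then drive all configuration over rpc.py. A minimal sketch of that sequence, approximating the waitforlisten helper with a simple poll loop (the loop is an assumption for illustration, not the helper's real implementation):

  # Hedged sketch of the bring-up sequence logged above.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
  # Poll until the RPC socket responds (stand-in for the waitforlisten helper).
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done
  # Attach the base NVMe controller, as the log does next.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0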
00:26:47.266 [2024-11-29 16:08:58.650362] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78661 ] 00:26:47.528 [2024-11-29 16:08:58.805527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.789 [2024-11-29 16:08:59.025627] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:47.789 [2024-11-29 16:08:59.025882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.177 16:09:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:49.177 16:09:00 -- common/autotest_common.sh@862 -- # return 0 00:26:49.177 16:09:00 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:49.177 16:09:00 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:49.177 16:09:00 -- ftl/common.sh@99 -- # local params 00:26:49.178 16:09:00 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:49.178 16:09:00 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:49.178 16:09:00 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:49.178 16:09:00 -- ftl/common.sh@101 -- # [[ -z 0000:00:07.0 ]] 00:26:49.178 16:09:00 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:49.178 16:09:00 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:49.178 16:09:00 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:49.178 16:09:00 -- ftl/common.sh@101 -- # [[ -z 0000:00:06.0 ]] 00:26:49.178 16:09:00 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:49.178 16:09:00 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:49.178 16:09:00 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:49.178 16:09:00 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:49.178 16:09:00 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:07.0 20480 00:26:49.178 16:09:00 -- ftl/common.sh@54 -- # local name=base 00:26:49.178 16:09:00 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:26:49.178 16:09:00 -- ftl/common.sh@56 -- # local size=20480 00:26:49.178 16:09:00 -- ftl/common.sh@59 -- # local base_bdev 00:26:49.178 16:09:00 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0 00:26:49.178 16:09:00 -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:49.178 16:09:00 -- ftl/common.sh@62 -- # local base_size 00:26:49.178 16:09:00 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:49.178 16:09:00 -- common/autotest_common.sh@1367 -- # local bdev_name=basen1 00:26:49.178 16:09:00 -- common/autotest_common.sh@1368 -- # local bdev_info 00:26:49.178 16:09:00 -- common/autotest_common.sh@1369 -- # local bs 00:26:49.178 16:09:00 -- common/autotest_common.sh@1370 -- # local nb 00:26:49.178 16:09:00 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:49.439 16:09:00 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:26:49.439 { 00:26:49.439 "name": "basen1", 00:26:49.439 "aliases": [ 00:26:49.439 "49bebcf3-4d60-43de-9692-0f2eebda18d7" 00:26:49.439 ], 00:26:49.439 "product_name": "NVMe disk", 00:26:49.439 "block_size": 4096, 00:26:49.439 "num_blocks": 1310720, 00:26:49.439 "uuid": "49bebcf3-4d60-43de-9692-0f2eebda18d7", 00:26:49.439 "assigned_rate_limits": { 00:26:49.439 "rw_ios_per_sec": 0, 00:26:49.439 
"rw_mbytes_per_sec": 0, 00:26:49.439 "r_mbytes_per_sec": 0, 00:26:49.439 "w_mbytes_per_sec": 0 00:26:49.439 }, 00:26:49.439 "claimed": true, 00:26:49.439 "claim_type": "read_many_write_one", 00:26:49.439 "zoned": false, 00:26:49.439 "supported_io_types": { 00:26:49.439 "read": true, 00:26:49.439 "write": true, 00:26:49.439 "unmap": true, 00:26:49.439 "write_zeroes": true, 00:26:49.439 "flush": true, 00:26:49.439 "reset": true, 00:26:49.439 "compare": true, 00:26:49.439 "compare_and_write": false, 00:26:49.439 "abort": true, 00:26:49.439 "nvme_admin": true, 00:26:49.439 "nvme_io": true 00:26:49.439 }, 00:26:49.439 "driver_specific": { 00:26:49.439 "nvme": [ 00:26:49.439 { 00:26:49.439 "pci_address": "0000:00:07.0", 00:26:49.439 "trid": { 00:26:49.439 "trtype": "PCIe", 00:26:49.439 "traddr": "0000:00:07.0" 00:26:49.439 }, 00:26:49.439 "ctrlr_data": { 00:26:49.439 "cntlid": 0, 00:26:49.439 "vendor_id": "0x1b36", 00:26:49.439 "model_number": "QEMU NVMe Ctrl", 00:26:49.439 "serial_number": "12341", 00:26:49.439 "firmware_revision": "8.0.0", 00:26:49.439 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:49.439 "oacs": { 00:26:49.439 "security": 0, 00:26:49.439 "format": 1, 00:26:49.439 "firmware": 0, 00:26:49.439 "ns_manage": 1 00:26:49.439 }, 00:26:49.439 "multi_ctrlr": false, 00:26:49.439 "ana_reporting": false 00:26:49.439 }, 00:26:49.439 "vs": { 00:26:49.439 "nvme_version": "1.4" 00:26:49.439 }, 00:26:49.439 "ns_data": { 00:26:49.439 "id": 1, 00:26:49.439 "can_share": false 00:26:49.439 } 00:26:49.439 } 00:26:49.439 ], 00:26:49.439 "mp_policy": "active_passive" 00:26:49.439 } 00:26:49.439 } 00:26:49.439 ]' 00:26:49.439 16:09:00 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:26:49.439 16:09:00 -- common/autotest_common.sh@1372 -- # bs=4096 00:26:49.439 16:09:00 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:26:49.439 16:09:00 -- common/autotest_common.sh@1373 -- # nb=1310720 00:26:49.439 16:09:00 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:26:49.439 16:09:00 -- common/autotest_common.sh@1377 -- # echo 5120 00:26:49.439 16:09:00 -- ftl/common.sh@63 -- # base_size=5120 00:26:49.439 16:09:00 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:49.439 16:09:00 -- ftl/common.sh@67 -- # clear_lvols 00:26:49.439 16:09:00 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:49.439 16:09:00 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:49.702 16:09:00 -- ftl/common.sh@28 -- # stores=e0eee463-6703-4c6d-81fc-2ce48274cdbd 00:26:49.702 16:09:00 -- ftl/common.sh@29 -- # for lvs in $stores 00:26:49.702 16:09:00 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e0eee463-6703-4c6d-81fc-2ce48274cdbd 00:26:49.962 16:09:01 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:49.962 16:09:01 -- ftl/common.sh@68 -- # lvs=08ef6962-0d12-4d0f-850b-5bec12dcf7b2 00:26:49.962 16:09:01 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 08ef6962-0d12-4d0f-850b-5bec12dcf7b2 00:26:50.223 16:09:01 -- ftl/common.sh@107 -- # base_bdev=f22ebc45-397e-4425-a2bf-526214c91867 00:26:50.223 16:09:01 -- ftl/common.sh@108 -- # [[ -z f22ebc45-397e-4425-a2bf-526214c91867 ]] 00:26:50.223 16:09:01 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:06.0 f22ebc45-397e-4425-a2bf-526214c91867 5120 00:26:50.223 16:09:01 -- ftl/common.sh@35 -- # local name=cache 00:26:50.223 16:09:01 -- 
ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:26:50.223 16:09:01 -- ftl/common.sh@37 -- # local base_bdev=f22ebc45-397e-4425-a2bf-526214c91867 00:26:50.223 16:09:01 -- ftl/common.sh@38 -- # local cache_size=5120 00:26:50.223 16:09:01 -- ftl/common.sh@41 -- # get_bdev_size f22ebc45-397e-4425-a2bf-526214c91867 00:26:50.223 16:09:01 -- common/autotest_common.sh@1367 -- # local bdev_name=f22ebc45-397e-4425-a2bf-526214c91867 00:26:50.223 16:09:01 -- common/autotest_common.sh@1368 -- # local bdev_info 00:26:50.223 16:09:01 -- common/autotest_common.sh@1369 -- # local bs 00:26:50.223 16:09:01 -- common/autotest_common.sh@1370 -- # local nb 00:26:50.223 16:09:01 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f22ebc45-397e-4425-a2bf-526214c91867 00:26:50.482 16:09:01 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:26:50.482 { 00:26:50.482 "name": "f22ebc45-397e-4425-a2bf-526214c91867", 00:26:50.482 "aliases": [ 00:26:50.482 "lvs/basen1p0" 00:26:50.482 ], 00:26:50.482 "product_name": "Logical Volume", 00:26:50.482 "block_size": 4096, 00:26:50.482 "num_blocks": 5242880, 00:26:50.482 "uuid": "f22ebc45-397e-4425-a2bf-526214c91867", 00:26:50.482 "assigned_rate_limits": { 00:26:50.482 "rw_ios_per_sec": 0, 00:26:50.482 "rw_mbytes_per_sec": 0, 00:26:50.482 "r_mbytes_per_sec": 0, 00:26:50.482 "w_mbytes_per_sec": 0 00:26:50.482 }, 00:26:50.482 "claimed": false, 00:26:50.482 "zoned": false, 00:26:50.482 "supported_io_types": { 00:26:50.482 "read": true, 00:26:50.482 "write": true, 00:26:50.482 "unmap": true, 00:26:50.482 "write_zeroes": true, 00:26:50.482 "flush": false, 00:26:50.482 "reset": true, 00:26:50.482 "compare": false, 00:26:50.482 "compare_and_write": false, 00:26:50.482 "abort": false, 00:26:50.482 "nvme_admin": false, 00:26:50.482 "nvme_io": false 00:26:50.482 }, 00:26:50.482 "driver_specific": { 00:26:50.482 "lvol": { 00:26:50.482 "lvol_store_uuid": "08ef6962-0d12-4d0f-850b-5bec12dcf7b2", 00:26:50.482 "base_bdev": "basen1", 00:26:50.482 "thin_provision": true, 00:26:50.482 "snapshot": false, 00:26:50.482 "clone": false, 00:26:50.482 "esnap_clone": false 00:26:50.482 } 00:26:50.482 } 00:26:50.482 } 00:26:50.482 ]' 00:26:50.482 16:09:01 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:26:50.482 16:09:01 -- common/autotest_common.sh@1372 -- # bs=4096 00:26:50.482 16:09:01 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:26:50.482 16:09:01 -- common/autotest_common.sh@1373 -- # nb=5242880 00:26:50.482 16:09:01 -- common/autotest_common.sh@1376 -- # bdev_size=20480 00:26:50.482 16:09:01 -- common/autotest_common.sh@1377 -- # echo 20480 00:26:50.482 16:09:01 -- ftl/common.sh@41 -- # local base_size=1024 00:26:50.482 16:09:01 -- ftl/common.sh@44 -- # local nvc_bdev 00:26:50.482 16:09:01 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0 00:26:50.741 16:09:02 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:50.741 16:09:02 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:50.741 16:09:02 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:51.000 16:09:02 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:51.000 16:09:02 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:51.000 16:09:02 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d f22ebc45-397e-4425-a2bf-526214c91867 -c cachen1p0 --l2p_dram_limit 2 00:26:51.000 
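Editor's note: the trace above builds the two bdevs that back the FTL instance — a thin-provisioned logical volume on the 0000:00:07.0 NVMe device for the base storage, and a 5120 MiB split of the 0000:00:06.0 device for the NV cache. A minimal sketch of the same chain, using only the rpc.py calls visible in this run (get_bdev_size here is just block_size * num_blocks / 1 MiB, e.g. 4096 * 5242880 / 1048576 = 20480 MiB):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Base device: NVMe controller -> lvstore -> thin lvol of FTL_BASE_SIZE MiB
$rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0   # yields basen1
# clear any stale lvstores left over from a previous run
for lvs in $($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
    $rpc bdev_lvol_delete_lvstore -u "$lvs"
done
lvs=$($rpc bdev_lvol_create_lvstore basen1 lvs)
base_bdev=$($rpc bdev_lvol_create basen1p0 20480 -t -u "$lvs")

# Cache device: NVMe controller -> first 5120 MiB split
$rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0  # yields cachen1
$rpc bdev_split_create cachen1 -s 5120 1                           # yields cachen1p0

# Tie them together; -t 60 leaves room for the first-startup NV cache scrub
$rpc -t 60 bdev_ftl_create -b ftl -d "$base_bdev" -c cachen1p0 --l2p_dram_limit 2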
[2024-11-29 16:09:02.416632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.000 [2024-11-29 16:09:02.416674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:51.000 [2024-11-29 16:09:02.416686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:51.000 [2024-11-29 16:09:02.416694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.000 [2024-11-29 16:09:02.416734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.000 [2024-11-29 16:09:02.416741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:51.000 [2024-11-29 16:09:02.416749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:26:51.000 [2024-11-29 16:09:02.416754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.000 [2024-11-29 16:09:02.416770] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:51.000 [2024-11-29 16:09:02.417348] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:51.000 [2024-11-29 16:09:02.417370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.000 [2024-11-29 16:09:02.417376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:51.000 [2024-11-29 16:09:02.417386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.601 ms 00:26:51.000 [2024-11-29 16:09:02.417391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.000 [2024-11-29 16:09:02.417440] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID c04498c1-bf25-4145-bf54-17bb86bfc638 00:26:51.000 [2024-11-29 16:09:02.418399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.001 [2024-11-29 16:09:02.418423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:51.001 [2024-11-29 16:09:02.418431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:51.001 [2024-11-29 16:09:02.418438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.001 [2024-11-29 16:09:02.423095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.001 [2024-11-29 16:09:02.423123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:51.001 [2024-11-29 16:09:02.423131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.625 ms 00:26:51.001 [2024-11-29 16:09:02.423138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.001 [2024-11-29 16:09:02.423166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.001 [2024-11-29 16:09:02.423175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:51.001 [2024-11-29 16:09:02.423181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:26:51.001 [2024-11-29 16:09:02.423211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.001 [2024-11-29 16:09:02.423243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.001 [2024-11-29 16:09:02.423254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:51.001 [2024-11-29 16:09:02.423260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:51.001 [2024-11-29 16:09:02.423267] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:26:51.001 [2024-11-29 16:09:02.423285] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:51.001 [2024-11-29 16:09:02.426201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.001 [2024-11-29 16:09:02.426228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:51.001 [2024-11-29 16:09:02.426237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.919 ms 00:26:51.001 [2024-11-29 16:09:02.426243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.001 [2024-11-29 16:09:02.426265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.001 [2024-11-29 16:09:02.426272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:51.001 [2024-11-29 16:09:02.426279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:51.001 [2024-11-29 16:09:02.426285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.001 [2024-11-29 16:09:02.426298] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:51.001 [2024-11-29 16:09:02.426384] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:26:51.001 [2024-11-29 16:09:02.426400] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:51.001 [2024-11-29 16:09:02.426408] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:26:51.001 [2024-11-29 16:09:02.426417] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:51.001 [2024-11-29 16:09:02.426424] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:51.001 [2024-11-29 16:09:02.426433] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:51.001 [2024-11-29 16:09:02.426438] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:51.001 [2024-11-29 16:09:02.426446] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:26:51.001 [2024-11-29 16:09:02.426452] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:26:51.001 [2024-11-29 16:09:02.426459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.001 [2024-11-29 16:09:02.426471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:51.001 [2024-11-29 16:09:02.426478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.162 ms 00:26:51.001 [2024-11-29 16:09:02.426483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.001 [2024-11-29 16:09:02.426532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.001 [2024-11-29 16:09:02.426543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:51.001 [2024-11-29 16:09:02.426550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:26:51.001 [2024-11-29 16:09:02.426557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.001 [2024-11-29 16:09:02.426615] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:51.001 [2024-11-29 16:09:02.426622] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:51.001 [2024-11-29 
16:09:02.426629] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:51.001 [2024-11-29 16:09:02.426635] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:51.001 [2024-11-29 16:09:02.426643] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:51.001 [2024-11-29 16:09:02.426648] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:51.001 [2024-11-29 16:09:02.426654] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:51.001 [2024-11-29 16:09:02.426659] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:51.001 [2024-11-29 16:09:02.426665] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:51.001 [2024-11-29 16:09:02.426670] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:51.001 [2024-11-29 16:09:02.426677] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:51.001 [2024-11-29 16:09:02.426682] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:51.001 [2024-11-29 16:09:02.426689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:51.001 [2024-11-29 16:09:02.426694] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:51.001 [2024-11-29 16:09:02.426701] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:26:51.001 [2024-11-29 16:09:02.426706] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:51.001 [2024-11-29 16:09:02.426714] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:51.001 [2024-11-29 16:09:02.426719] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:26:51.001 [2024-11-29 16:09:02.426725] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:51.001 [2024-11-29 16:09:02.426730] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:26:51.001 [2024-11-29 16:09:02.426737] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:26:51.001 [2024-11-29 16:09:02.426741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:26:51.001 [2024-11-29 16:09:02.426748] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:51.001 [2024-11-29 16:09:02.426753] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:51.001 [2024-11-29 16:09:02.426759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:51.001 [2024-11-29 16:09:02.426764] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:51.001 [2024-11-29 16:09:02.426770] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:26:51.001 [2024-11-29 16:09:02.426775] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:51.001 [2024-11-29 16:09:02.426781] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:51.001 [2024-11-29 16:09:02.426786] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:51.001 [2024-11-29 16:09:02.426792] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:51.001 [2024-11-29 16:09:02.426797] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:51.001 [2024-11-29 16:09:02.426805] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:26:51.001 [2024-11-29 16:09:02.426809] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:51.001 [2024-11-29 
16:09:02.426816] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:51.001 [2024-11-29 16:09:02.426822] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:51.001 [2024-11-29 16:09:02.426828] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:51.001 [2024-11-29 16:09:02.426832] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:51.001 [2024-11-29 16:09:02.426839] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:26:51.001 [2024-11-29 16:09:02.426844] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:51.001 [2024-11-29 16:09:02.426850] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:51.001 [2024-11-29 16:09:02.426856] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:51.001 [2024-11-29 16:09:02.426863] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:51.001 [2024-11-29 16:09:02.426868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:51.001 [2024-11-29 16:09:02.426876] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:51.001 [2024-11-29 16:09:02.426882] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:51.001 [2024-11-29 16:09:02.426889] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:51.001 [2024-11-29 16:09:02.426894] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:51.001 [2024-11-29 16:09:02.426902] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:51.001 [2024-11-29 16:09:02.426907] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:51.001 [2024-11-29 16:09:02.426914] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:51.001 [2024-11-29 16:09:02.426922] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:51.001 [2024-11-29 16:09:02.426930] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:51.001 [2024-11-29 16:09:02.426935] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:26:51.001 [2024-11-29 16:09:02.426942] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:26:51.001 [2024-11-29 16:09:02.426948] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:26:51.001 [2024-11-29 16:09:02.426955] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:26:51.001 [2024-11-29 16:09:02.426961] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:26:51.001 [2024-11-29 16:09:02.426967] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:26:51.001 [2024-11-29 16:09:02.426982] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:26:51.001 [2024-11-29 16:09:02.426989] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:26:51.002 [2024-11-29 16:09:02.426994] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:26:51.002 [2024-11-29 16:09:02.427001] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:26:51.002 [2024-11-29 16:09:02.427007] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:26:51.002 [2024-11-29 16:09:02.427017] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:26:51.002 [2024-11-29 16:09:02.427022] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:51.002 [2024-11-29 16:09:02.427030] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:51.002 [2024-11-29 16:09:02.427036] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:51.002 [2024-11-29 16:09:02.427043] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:51.002 [2024-11-29 16:09:02.427049] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:51.002 [2024-11-29 16:09:02.427055] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:51.002 [2024-11-29 16:09:02.427061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.002 [2024-11-29 16:09:02.427068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:51.002 [2024-11-29 16:09:02.427073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.482 ms 00:26:51.002 [2024-11-29 16:09:02.427080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.260 [2024-11-29 16:09:02.438863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.260 [2024-11-29 16:09:02.438894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:51.260 [2024-11-29 16:09:02.438902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.753 ms 00:26:51.260 [2024-11-29 16:09:02.438909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.260 [2024-11-29 16:09:02.438940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.260 [2024-11-29 16:09:02.438950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:51.260 [2024-11-29 16:09:02.438956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:51.260 [2024-11-29 16:09:02.438962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.260 [2024-11-29 16:09:02.462728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.260 [2024-11-29 16:09:02.462758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:51.260 [2024-11-29 16:09:02.462766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.723 ms 00:26:51.260 [2024-11-29 
16:09:02.462776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.260 [2024-11-29 16:09:02.462797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.260 [2024-11-29 16:09:02.462804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:51.260 [2024-11-29 16:09:02.462811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:51.260 [2024-11-29 16:09:02.462821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.260 [2024-11-29 16:09:02.463143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.260 [2024-11-29 16:09:02.463224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:51.260 [2024-11-29 16:09:02.463231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.289 ms 00:26:51.260 [2024-11-29 16:09:02.463238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.260 [2024-11-29 16:09:02.463271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.260 [2024-11-29 16:09:02.463281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:51.260 [2024-11-29 16:09:02.463286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:26:51.260 [2024-11-29 16:09:02.463293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.260 [2024-11-29 16:09:02.475207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.260 [2024-11-29 16:09:02.475234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:51.260 [2024-11-29 16:09:02.475241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.899 ms 00:26:51.260 [2024-11-29 16:09:02.475250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.260 [2024-11-29 16:09:02.484225] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:51.261 [2024-11-29 16:09:02.484930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.261 [2024-11-29 16:09:02.484953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:51.261 [2024-11-29 16:09:02.484961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.625 ms 00:26:51.261 [2024-11-29 16:09:02.484967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.261 [2024-11-29 16:09:02.506811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.261 [2024-11-29 16:09:02.506842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:51.261 [2024-11-29 16:09:02.506853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 21.813 ms 00:26:51.261 [2024-11-29 16:09:02.506859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.261 [2024-11-29 16:09:02.506892] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 
00:26:51.261 [2024-11-29 16:09:02.506900] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:26:55.552 [2024-11-29 16:09:06.131664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.552 [2024-11-29 16:09:06.131759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:55.552 [2024-11-29 16:09:06.131784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3624.743 ms 00:26:55.552 [2024-11-29 16:09:06.131794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.552 [2024-11-29 16:09:06.131927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.552 [2024-11-29 16:09:06.131943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:55.552 [2024-11-29 16:09:06.131956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.072 ms 00:26:55.552 [2024-11-29 16:09:06.131965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.552 [2024-11-29 16:09:06.157940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.552 [2024-11-29 16:09:06.158011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:55.552 [2024-11-29 16:09:06.158029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.901 ms 00:26:55.552 [2024-11-29 16:09:06.158038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.552 [2024-11-29 16:09:06.183182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.552 [2024-11-29 16:09:06.183236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:55.552 [2024-11-29 16:09:06.183255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.080 ms 00:26:55.552 [2024-11-29 16:09:06.183263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.552 [2024-11-29 16:09:06.183615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.552 [2024-11-29 16:09:06.183627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:55.552 [2024-11-29 16:09:06.183638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.303 ms 00:26:55.552 [2024-11-29 16:09:06.183649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.552 [2024-11-29 16:09:06.256354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.552 [2024-11-29 16:09:06.256408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:55.552 [2024-11-29 16:09:06.256425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 72.660 ms 00:26:55.552 [2024-11-29 16:09:06.256433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.552 [2024-11-29 16:09:06.283762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.552 [2024-11-29 16:09:06.283812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:55.552 [2024-11-29 16:09:06.283828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 27.270 ms 00:26:55.552 [2024-11-29 16:09:06.283837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.552 [2024-11-29 16:09:06.285603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.552 [2024-11-29 16:09:06.285647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:26:55.552 [2024-11-29 16:09:06.285665] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.696 ms 00:26:55.552 [2024-11-29 16:09:06.285673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.552 [2024-11-29 16:09:06.311936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.552 [2024-11-29 16:09:06.312000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:55.552 [2024-11-29 16:09:06.312016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.195 ms 00:26:55.552 [2024-11-29 16:09:06.312024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.552 [2024-11-29 16:09:06.312082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.552 [2024-11-29 16:09:06.312093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:55.552 [2024-11-29 16:09:06.312104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:55.552 [2024-11-29 16:09:06.312112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.552 [2024-11-29 16:09:06.312211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.552 [2024-11-29 16:09:06.312223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:55.552 [2024-11-29 16:09:06.312233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:26:55.552 [2024-11-29 16:09:06.312241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.552 [2024-11-29 16:09:06.313416] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3896.259 ms, result 0 00:26:55.552 { 00:26:55.552 "name": "ftl", 00:26:55.552 "uuid": "c04498c1-bf25-4145-bf54-17bb86bfc638" 00:26:55.552 } 00:26:55.552 16:09:06 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:55.553 [2024-11-29 16:09:06.528559] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:55.553 16:09:06 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:55.553 16:09:06 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:55.553 [2024-11-29 16:09:06.929035] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:26:55.553 16:09:06 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:55.814 [2024-11-29 16:09:07.138257] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:55.814 16:09:07 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:56.076 16:09:07 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:56.076 16:09:07 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:56.076 16:09:07 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:56.076 16:09:07 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:56.076 16:09:07 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:56.076 16:09:07 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:56.076 16:09:07 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:56.076 16:09:07 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:56.076 16:09:07 -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:26:56.076 16:09:07 -- 
ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:56.076 Fill FTL, iteration 1 00:26:56.076 16:09:07 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:56.076 16:09:07 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:56.076 16:09:07 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:56.076 16:09:07 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:56.076 16:09:07 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:56.076 16:09:07 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:56.076 16:09:07 -- ftl/common.sh@163 -- # spdk_ini_pid=78787 00:26:56.076 16:09:07 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:56.076 16:09:07 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:56.076 16:09:07 -- ftl/common.sh@165 -- # waitforlisten 78787 /var/tmp/spdk.tgt.sock 00:26:56.076 16:09:07 -- common/autotest_common.sh@829 -- # '[' -z 78787 ']' 00:26:56.076 16:09:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:56.076 16:09:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:56.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:56.076 16:09:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:56.076 16:09:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:56.076 16:09:07 -- common/autotest_common.sh@10 -- # set +x 00:26:56.338 [2024-11-29 16:09:07.536355] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
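Editor's note: before the fill iterations begin, the target side (traced a few records earlier) exports the FTL bdev over NVMe/TCP, so all subsequent data-path traffic goes through an initiator rather than hitting the bdev directly. A sketch of that export sequence, assuming the same NQN and listener address; the redirect of save_config into tgt.json is implied by the harness's later file check but not visible in the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2018-09.io.spdk:cnode0

$rpc nvmf_create_transport --trtype TCP
$rpc nvmf_create_subsystem "$nqn" -a -m 1        # allow any host, one namespace
$rpc nvmf_subsystem_add_ns "$nqn" ftl            # expose the FTL bdev
$rpc nvmf_subsystem_add_listener "$nqn" -t TCP -f ipv4 -s 4420 -a 127.0.0.1
$rpc save_config                                  # dumped to tgt.json by the harness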
00:26:56.338 [2024-11-29 16:09:07.536496] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78787 ] 00:26:56.338 [2024-11-29 16:09:07.683562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.600 [2024-11-29 16:09:07.902341] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:56.600 [2024-11-29 16:09:07.902567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.977 16:09:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:57.977 16:09:09 -- common/autotest_common.sh@862 -- # return 0 00:26:57.977 16:09:09 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:57.977 ftln1 00:26:57.977 16:09:09 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:57.977 16:09:09 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:58.236 16:09:09 -- ftl/common.sh@173 -- # echo ']}' 00:26:58.236 16:09:09 -- ftl/common.sh@176 -- # killprocess 78787 00:26:58.236 16:09:09 -- common/autotest_common.sh@936 -- # '[' -z 78787 ']' 00:26:58.236 16:09:09 -- common/autotest_common.sh@940 -- # kill -0 78787 00:26:58.236 16:09:09 -- common/autotest_common.sh@941 -- # uname 00:26:58.236 16:09:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:58.236 16:09:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78787 00:26:58.236 16:09:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:58.236 16:09:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:58.236 killing process with pid 78787 00:26:58.236 16:09:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78787' 00:26:58.236 16:09:09 -- common/autotest_common.sh@955 -- # kill 78787 00:26:58.236 16:09:09 -- common/autotest_common.sh@960 -- # wait 78787 00:26:59.611 16:09:10 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:26:59.611 16:09:10 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:59.611 [2024-11-29 16:09:10.822221] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
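Editor's note: each tcp_dd call in this log is two-phase. A short-lived initiator app (pinned to core 1, with its own RPC socket) attaches the exported namespace as ftln1 and records the bdev subsystem configuration; spdk_dd then replays that JSON to reach the same bdev without needing a live RPC server. A condensed sketch of the pattern, assuming the paths from this run; the redirect into ini.json and the use of plain kill are inferred from the harness (which uses waitforlisten/killprocess around these steps):

ini_rpc=/var/tmp/spdk.tgt.sock
ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $ini_rpc"

# One-shot initiator: attach the remote namespace, save the bdev config
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=$ini_rpc &
ini_pid=$!
# (the harness waits for the socket with waitforlisten before issuing RPCs)
$rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2018-09.io.spdk:cnode0                 # creates bdev ftln1
{ echo '{"subsystems": ['; $rpc save_subsystem_config -n bdev; echo ']}'; } > $ini_cnfg
kill $ini_pid

# spdk_dd drives I/O against ftln1 using only the saved JSON
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=$ini_rpc \
    --json=$ini_cnfg --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0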
00:26:59.611 [2024-11-29 16:09:10.822329] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78842 ] 00:26:59.611 [2024-11-29 16:09:10.968416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.868 [2024-11-29 16:09:11.110960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.244  [2024-11-29T16:09:13.611Z] Copying: 276/1024 [MB] (276 MBps) [2024-11-29T16:09:14.546Z] Copying: 538/1024 [MB] (262 MBps) [2024-11-29T16:09:15.480Z] Copying: 799/1024 [MB] (261 MBps) [2024-11-29T16:09:16.048Z] Copying: 1024/1024 [MB] (average 265 MBps) 00:27:04.617 00:27:04.617 16:09:15 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:04.617 Calculate MD5 checksum, iteration 1 00:27:04.617 16:09:15 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:04.617 16:09:15 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:04.617 16:09:15 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:04.617 16:09:15 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:04.617 16:09:15 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:04.617 16:09:15 -- ftl/common.sh@154 -- # return 0 00:27:04.617 16:09:15 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:04.617 [2024-11-29 16:09:15.958900] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
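Editor's note: the fill/verify bookkeeping is plain dd arithmetic. Each iteration writes 1024 chunks of 1 MiB into ftln1 at the current seek offset, then reads the same 1 GiB window back into a scratch file (skip trails seek by one iteration) and fingerprints it with md5sum. A sketch of one iteration using the harness's tcp_dd wrapper and the variables from upgrade_shutdown.sh:

bs=1048576 count=1024 qd=2
seek=0 skip=0
file=/home/vagrant/spdk_repo/spdk/test/ftl/file

# Write 1 GiB of random data at the current offset...
tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
seek=$((seek + count))
# ...then read the same window back and record its fingerprint
tcp_dd --ib=ftln1 --of=$file --bs=$bs --count=$count --qd=$qd --skip=$skip
skip=$((skip + count))
sums[i]=$(md5sum $file | cut -f1 -d' ')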
00:27:04.617 [2024-11-29 16:09:15.959037] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78902 ] 00:27:04.877 [2024-11-29 16:09:16.107426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.877 [2024-11-29 16:09:16.242623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.258  [2024-11-29T16:09:18.260Z] Copying: 680/1024 [MB] (680 MBps) [2024-11-29T16:09:18.831Z] Copying: 1024/1024 [MB] (average 671 MBps) 00:27:07.400 00:27:07.400 16:09:18 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:07.400 16:09:18 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:09.942 16:09:20 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:09.942 Fill FTL, iteration 2 00:27:09.942 16:09:20 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=764352b4387f108a2d12d57d519c06c0 00:27:09.942 16:09:20 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:09.942 16:09:20 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:09.942 16:09:20 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:09.942 16:09:20 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:09.942 16:09:20 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:09.942 16:09:20 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:09.942 16:09:20 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:09.942 16:09:20 -- ftl/common.sh@154 -- # return 0 00:27:09.942 16:09:20 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:09.942 [2024-11-29 16:09:20.818714] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:09.942 [2024-11-29 16:09:20.818811] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78952 ] 00:27:09.943 [2024-11-29 16:09:20.964533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.943 [2024-11-29 16:09:21.122425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.327  [2024-11-29T16:09:23.702Z] Copying: 249/1024 [MB] (249 MBps) [2024-11-29T16:09:24.645Z] Copying: 486/1024 [MB] (237 MBps) [2024-11-29T16:09:25.585Z] Copying: 710/1024 [MB] (224 MBps) [2024-11-29T16:09:26.156Z] Copying: 927/1024 [MB] (217 MBps) [2024-11-29T16:09:26.726Z] Copying: 1024/1024 [MB] (average 231 MBps) 00:27:15.295 00:27:15.295 Calculate MD5 checksum, iteration 2 00:27:15.295 16:09:26 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:15.295 16:09:26 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:15.295 16:09:26 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:15.295 16:09:26 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:15.295 16:09:26 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:15.295 16:09:26 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:15.295 16:09:26 -- ftl/common.sh@154 -- # return 0 00:27:15.295 16:09:26 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:15.295 [2024-11-29 16:09:26.619433] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:15.295 [2024-11-29 16:09:26.619549] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79017 ] 00:27:15.554 [2024-11-29 16:09:26.768051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.554 [2024-11-29 16:09:26.934168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.942  [2024-11-29T16:09:29.314Z] Copying: 642/1024 [MB] (642 MBps) [2024-11-29T16:09:30.257Z] Copying: 1024/1024 [MB] (average 629 MBps) 00:27:18.826 00:27:18.826 16:09:29 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:18.826 16:09:29 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:20.734 16:09:32 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:20.734 16:09:32 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=390cdfa3d7c658202967a4f828919810 00:27:20.734 16:09:32 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:20.734 16:09:32 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:20.734 16:09:32 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:20.994 [2024-11-29 16:09:32.304166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.994 [2024-11-29 16:09:32.304232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:20.994 [2024-11-29 16:09:32.304248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:20.994 [2024-11-29 16:09:32.304261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.994 [2024-11-29 16:09:32.304290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.994 [2024-11-29 16:09:32.304299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:20.994 [2024-11-29 16:09:32.304308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:20.994 [2024-11-29 16:09:32.304316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.994 [2024-11-29 16:09:32.304363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.994 [2024-11-29 16:09:32.304372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:20.994 [2024-11-29 16:09:32.304390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:20.994 [2024-11-29 16:09:32.304398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.994 [2024-11-29 16:09:32.304475] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.299 ms, result 0 00:27:20.994 true 00:27:20.994 16:09:32 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:21.254 { 00:27:21.254 "name": "ftl", 00:27:21.254 "properties": [ 00:27:21.254 { 00:27:21.254 "name": "superblock_version", 00:27:21.254 "value": 5, 00:27:21.254 "read-only": true 00:27:21.254 }, 00:27:21.254 { 00:27:21.254 "name": "base_device", 00:27:21.254 "bands": [ 00:27:21.254 { 00:27:21.254 "id": 0, 00:27:21.254 "state": "FREE", 00:27:21.254 "validity": 0.0 00:27:21.254 }, 00:27:21.254 { 00:27:21.254 "id": 1, 00:27:21.254 "state": "FREE", 00:27:21.254 "validity": 0.0 00:27:21.254 }, 00:27:21.254 { 00:27:21.254 "id": 2, 00:27:21.254 "state": "FREE", 00:27:21.254 "validity": 0.0 
00:27:21.254 }, 00:27:21.254 { 00:27:21.254 "id": 3, 00:27:21.254 "state": "FREE", 00:27:21.254 "validity": 0.0 00:27:21.254 }, 00:27:21.254 { 00:27:21.254 "id": 4, 00:27:21.254 "state": "FREE", 00:27:21.254 "validity": 0.0 00:27:21.254 }, 00:27:21.254 { 00:27:21.254 "id": 5, 00:27:21.254 "state": "FREE", 00:27:21.254 "validity": 0.0 00:27:21.254 }, 00:27:21.254 { 00:27:21.254 "id": 6, 00:27:21.254 "state": "FREE", 00:27:21.254 "validity": 0.0 00:27:21.254 }, 00:27:21.254 { 00:27:21.254 "id": 7, 00:27:21.254 "state": "FREE", 00:27:21.254 "validity": 0.0 00:27:21.254 }, 00:27:21.254 { 00:27:21.254 "id": 8, 00:27:21.254 "state": "FREE", 00:27:21.254 "validity": 0.0 00:27:21.254 }, 00:27:21.254 { 00:27:21.254 "id": 9, 00:27:21.254 "state": "FREE", 00:27:21.254 "validity": 0.0 00:27:21.254 }, 00:27:21.254 { 00:27:21.254 "id": 10, 00:27:21.254 "state": "FREE", 00:27:21.254 "validity": 0.0 00:27:21.254 }, 00:27:21.254 { 00:27:21.254 "id": 11, 00:27:21.254 "state": "FREE", 00:27:21.254 "validity": 0.0 00:27:21.254 }, 00:27:21.254 { 00:27:21.254 "id": 12, 00:27:21.254 "state": "FREE", 00:27:21.254 "validity": 0.0 00:27:21.254 }, 00:27:21.254 { 00:27:21.254 "id": 13, 00:27:21.254 "state": "FREE", 00:27:21.254 "validity": 0.0 00:27:21.254 }, 00:27:21.254 { 00:27:21.254 "id": 14, 00:27:21.254 "state": "FREE", 00:27:21.254 "validity": 0.0 00:27:21.254 }, 00:27:21.254 { 00:27:21.254 "id": 15, 00:27:21.254 "state": "FREE", 00:27:21.254 "validity": 0.0 00:27:21.255 }, 00:27:21.255 { 00:27:21.255 "id": 16, 00:27:21.255 "state": "FREE", 00:27:21.255 "validity": 0.0 00:27:21.255 }, 00:27:21.255 { 00:27:21.255 "id": 17, 00:27:21.255 "state": "FREE", 00:27:21.255 "validity": 0.0 00:27:21.255 } 00:27:21.255 ], 00:27:21.255 "read-only": true 00:27:21.255 }, 00:27:21.255 { 00:27:21.255 "name": "cache_device", 00:27:21.255 "type": "bdev", 00:27:21.255 "chunks": [ 00:27:21.255 { 00:27:21.255 "id": 0, 00:27:21.255 "state": "CLOSED", 00:27:21.255 "utilization": 1.0 00:27:21.255 }, 00:27:21.255 { 00:27:21.255 "id": 1, 00:27:21.255 "state": "CLOSED", 00:27:21.255 "utilization": 1.0 00:27:21.255 }, 00:27:21.255 { 00:27:21.255 "id": 2, 00:27:21.255 "state": "OPEN", 00:27:21.255 "utilization": 0.001953125 00:27:21.255 }, 00:27:21.255 { 00:27:21.255 "id": 3, 00:27:21.255 "state": "OPEN", 00:27:21.255 "utilization": 0.0 00:27:21.255 } 00:27:21.255 ], 00:27:21.255 "read-only": true 00:27:21.255 }, 00:27:21.255 { 00:27:21.255 "name": "verbose_mode", 00:27:21.255 "value": true, 00:27:21.255 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:21.255 }, 00:27:21.255 { 00:27:21.255 "name": "prep_upgrade_on_shutdown", 00:27:21.255 "value": false, 00:27:21.255 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:21.255 } 00:27:21.255 ] 00:27:21.255 } 00:27:21.255 16:09:32 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:21.516 [2024-11-29 16:09:32.720596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.516 [2024-11-29 16:09:32.720640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:21.516 [2024-11-29 16:09:32.720652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:21.516 [2024-11-29 16:09:32.720659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.516 [2024-11-29 16:09:32.720681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action
00:27:21.516 [2024-11-29 16:09:32.720688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:27:21.516 [2024-11-29 16:09:32.720696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:27:21.516 [2024-11-29 16:09:32.720703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:21.516 [2024-11-29 16:09:32.720723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:21.516 [2024-11-29 16:09:32.720730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:27:21.516 [2024-11-29 16:09:32.720738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:27:21.516 [2024-11-29 16:09:32.720744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:21.516 [2024-11-29 16:09:32.720798] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.194 ms, result 0
00:27:21.516 true
00:27:21.516 16:09:32 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties
00:27:21.516 16:09:32 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:27:21.516 16:09:32 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:27:21.516 16:09:32 -- ftl/upgrade_shutdown.sh@63 -- # used=3
00:27:21.516 16:09:32 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]]
00:27:21.516 16:09:32 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:27:21.775 [2024-11-29 16:09:33.125024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:21.775 [2024-11-29 16:09:33.125083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:27:21.775 [2024-11-29 16:09:33.125097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms
00:27:21.775 [2024-11-29 16:09:33.125105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:21.775 [2024-11-29 16:09:33.125129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:21.775 [2024-11-29 16:09:33.125138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:27:21.775 [2024-11-29 16:09:33.125147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:27:21.775 [2024-11-29 16:09:33.125155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:21.775 [2024-11-29 16:09:33.125175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:21.775 [2024-11-29 16:09:33.125183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:27:21.775 [2024-11-29 16:09:33.125191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:27:21.775 [2024-11-29 16:09:33.125198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:21.775 [2024-11-29 16:09:33.125259] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.224 ms, result 0
00:27:21.775 true
00:27:21.775 16:09:33 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:27:22.033 {
00:27:22.033 "name": "ftl",
00:27:22.033 "properties": [
00:27:22.033 {
00:27:22.033 "name": "superblock_version",
00:27:22.033 "value": 5,
00:27:22.033 "read-only": true
00:27:22.033 },
00:27:22.033 {
00:27:22.033 "name": "base_device",
00:27:22.034 "bands": [
00:27:22.034 {
00:27:22.034 "id": 0,
00:27:22.034 "state": "FREE",
00:27:22.034 "validity": 0.0
00:27:22.034 },
00:27:22.034 {
00:27:22.034 "id": 1,
00:27:22.034 "state": "FREE",
00:27:22.034 "validity": 0.0
00:27:22.034 },
00:27:22.034 {
00:27:22.034 "id": 2,
00:27:22.034 "state": "FREE",
00:27:22.034 "validity": 0.0
00:27:22.034 },
00:27:22.034 {
00:27:22.034 "id": 3,
00:27:22.034 "state": "FREE",
00:27:22.034 "validity": 0.0
00:27:22.034 },
00:27:22.034 {
00:27:22.034 "id": 4,
00:27:22.034 "state": "FREE",
00:27:22.034 "validity": 0.0
00:27:22.034 },
00:27:22.034 {
00:27:22.034 "id": 5,
00:27:22.034 "state": "FREE",
00:27:22.034 "validity": 0.0
00:27:22.034 },
00:27:22.034 {
00:27:22.034 "id": 6,
00:27:22.034 "state": "FREE",
00:27:22.034 "validity": 0.0
00:27:22.034 },
00:27:22.034 {
00:27:22.034 "id": 7,
00:27:22.034 "state": "FREE",
00:27:22.034 "validity": 0.0
00:27:22.034 },
00:27:22.034 {
00:27:22.034 "id": 8,
00:27:22.034 "state": "FREE",
00:27:22.034 "validity": 0.0
00:27:22.034 },
00:27:22.034 {
00:27:22.034 "id": 9,
00:27:22.034 "state": "FREE",
00:27:22.034 "validity": 0.0
00:27:22.034 },
00:27:22.034 {
00:27:22.034 "id": 10,
00:27:22.034 "state": "FREE",
00:27:22.034 "validity": 0.0
00:27:22.034 },
00:27:22.034 {
00:27:22.034 "id": 11,
00:27:22.034 "state": "FREE",
00:27:22.034 "validity": 0.0
00:27:22.034 },
00:27:22.034 {
00:27:22.034 "id": 12,
00:27:22.034 "state": "FREE",
00:27:22.034 "validity": 0.0
00:27:22.034 },
00:27:22.034 {
00:27:22.034 "id": 13,
00:27:22.034 "state": "FREE",
00:27:22.034 "validity": 0.0
00:27:22.034 },
00:27:22.034 {
00:27:22.034 "id": 14,
00:27:22.034 "state": "FREE",
00:27:22.034 "validity": 0.0
00:27:22.034 },
00:27:22.034 {
00:27:22.034 "id": 15,
00:27:22.034 "state": "FREE",
00:27:22.034 "validity": 0.0
00:27:22.034 },
00:27:22.034 {
00:27:22.034 "id": 16,
00:27:22.034 "state": "FREE",
00:27:22.034 "validity": 0.0
00:27:22.034 },
00:27:22.034 {
00:27:22.034 "id": 17,
00:27:22.034 "state": "FREE",
00:27:22.034 "validity": 0.0
00:27:22.034 }
00:27:22.034 ],
00:27:22.034 "read-only": true
00:27:22.034 },
00:27:22.034 {
00:27:22.034 "name": "cache_device",
00:27:22.034 "type": "bdev",
00:27:22.034 "chunks": [
00:27:22.034 {
00:27:22.034 "id": 0,
00:27:22.034 "state": "CLOSED",
00:27:22.034 "utilization": 1.0
00:27:22.034 },
00:27:22.034 {
00:27:22.034 "id": 1,
00:27:22.034 "state": "CLOSED",
00:27:22.034 "utilization": 1.0
00:27:22.034 },
00:27:22.034 {
00:27:22.034 "id": 2,
00:27:22.034 "state": "OPEN",
00:27:22.034 "utilization": 0.001953125
00:27:22.034 },
00:27:22.034 {
00:27:22.034 "id": 3,
00:27:22.034 "state": "OPEN",
00:27:22.034 "utilization": 0.0
00:27:22.034 }
00:27:22.034 ],
00:27:22.034 "read-only": true
00:27:22.034 },
00:27:22.034 {
00:27:22.034 "name": "verbose_mode",
00:27:22.034 "value": true,
00:27:22.034 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:27:22.034 },
00:27:22.034 {
00:27:22.034 "name": "prep_upgrade_on_shutdown",
00:27:22.034 "value": true,
00:27:22.034 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:27:22.034 }
00:27:22.034 ]
00:27:22.034 }
00:27:22.034 16:09:33 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown
00:27:22.034 16:09:33 -- ftl/common.sh@130 -- # [[ -n 78661 ]]
00:27:22.034 16:09:33 -- ftl/common.sh@131 -- # killprocess 78661
00:27:22.034 16:09:33 -- common/autotest_common.sh@936 -- # '[' -z 78661 ']'
00:27:22.034 16:09:33 --
common/autotest_common.sh@940 -- # kill -0 78661 00:27:22.034 16:09:33 -- common/autotest_common.sh@941 -- # uname 00:27:22.034 16:09:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:22.034 16:09:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78661 00:27:22.034 killing process with pid 78661 00:27:22.034 16:09:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:22.034 16:09:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:22.034 16:09:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78661' 00:27:22.034 16:09:33 -- common/autotest_common.sh@955 -- # kill 78661 00:27:22.034 16:09:33 -- common/autotest_common.sh@960 -- # wait 78661 00:27:22.601 [2024-11-29 16:09:33.865585] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:27:22.601 [2024-11-29 16:09:33.877247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:22.601 [2024-11-29 16:09:33.877280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:22.601 [2024-11-29 16:09:33.877290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:22.601 [2024-11-29 16:09:33.877296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:22.601 [2024-11-29 16:09:33.877312] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:22.601 [2024-11-29 16:09:33.879327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:22.601 [2024-11-29 16:09:33.879357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:22.601 [2024-11-29 16:09:33.879365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.003 ms 00:27:22.601 [2024-11-29 16:09:33.879372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.723 [2024-11-29 16:09:42.050264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.723 [2024-11-29 16:09:42.050320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:30.723 [2024-11-29 16:09:42.050333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8170.845 ms 00:27:30.723 [2024-11-29 16:09:42.050343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.723 [2024-11-29 16:09:42.051387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.723 [2024-11-29 16:09:42.051406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:30.723 [2024-11-29 16:09:42.051414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.031 ms 00:27:30.723 [2024-11-29 16:09:42.051420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.723 [2024-11-29 16:09:42.052277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.723 [2024-11-29 16:09:42.052291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:27:30.723 [2024-11-29 16:09:42.052298] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.838 ms 00:27:30.723 [2024-11-29 16:09:42.052304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.723 [2024-11-29 16:09:42.060022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.723 [2024-11-29 16:09:42.060051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:30.723 [2024-11-29 16:09:42.060059] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.687 ms 00:27:30.723 [2024-11-29 16:09:42.060064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.723 [2024-11-29 16:09:42.065057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.723 [2024-11-29 16:09:42.065084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:30.723 [2024-11-29 16:09:42.065092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.968 ms 00:27:30.723 [2024-11-29 16:09:42.065099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.723 [2024-11-29 16:09:42.065154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.723 [2024-11-29 16:09:42.065161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:30.723 [2024-11-29 16:09:42.065171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:27:30.723 [2024-11-29 16:09:42.065176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.723 [2024-11-29 16:09:42.072246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.723 [2024-11-29 16:09:42.072271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:30.723 [2024-11-29 16:09:42.072278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.059 ms 00:27:30.723 [2024-11-29 16:09:42.072283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.723 [2024-11-29 16:09:42.079622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.723 [2024-11-29 16:09:42.079646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:30.723 [2024-11-29 16:09:42.079652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.316 ms 00:27:30.723 [2024-11-29 16:09:42.079657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.723 [2024-11-29 16:09:42.086736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.723 [2024-11-29 16:09:42.086761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:30.723 [2024-11-29 16:09:42.086767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.055 ms 00:27:30.723 [2024-11-29 16:09:42.086772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.724 [2024-11-29 16:09:42.094000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.724 [2024-11-29 16:09:42.094024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:30.724 [2024-11-29 16:09:42.094030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.176 ms 00:27:30.724 [2024-11-29 16:09:42.094036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.724 [2024-11-29 16:09:42.094058] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:30.724 [2024-11-29 16:09:42.094069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:30.724 [2024-11-29 16:09:42.094077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:30.724 [2024-11-29 16:09:42.094083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:30.724 [2024-11-29 16:09:42.094088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 
wr_cnt: 0 state: free 00:27:30.724 [2024-11-29 16:09:42.094094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:30.724 [2024-11-29 16:09:42.094100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:30.724 [2024-11-29 16:09:42.094105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:30.724 [2024-11-29 16:09:42.094111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:30.724 [2024-11-29 16:09:42.094116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:30.724 [2024-11-29 16:09:42.094122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:30.724 [2024-11-29 16:09:42.094127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:30.724 [2024-11-29 16:09:42.094133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:30.724 [2024-11-29 16:09:42.094139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:30.724 [2024-11-29 16:09:42.094144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:30.724 [2024-11-29 16:09:42.094149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:30.724 [2024-11-29 16:09:42.094160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:30.724 [2024-11-29 16:09:42.094166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:30.724 [2024-11-29 16:09:42.094172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:30.724 [2024-11-29 16:09:42.094180] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:30.724 [2024-11-29 16:09:42.094185] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: c04498c1-bf25-4145-bf54-17bb86bfc638 00:27:30.724 [2024-11-29 16:09:42.094191] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:30.724 [2024-11-29 16:09:42.094197] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:27:30.724 [2024-11-29 16:09:42.094202] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:30.724 [2024-11-29 16:09:42.094208] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:30.724 [2024-11-29 16:09:42.094214] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:30.724 [2024-11-29 16:09:42.094221] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:30.724 [2024-11-29 16:09:42.094227] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:30.724 [2024-11-29 16:09:42.094231] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:30.724 [2024-11-29 16:09:42.094236] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:30.724 [2024-11-29 16:09:42.094241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.724 [2024-11-29 16:09:42.094247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:30.724 [2024-11-29 16:09:42.094257] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.183 ms 00:27:30.724 [2024-11-29 16:09:42.094262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.724 [2024-11-29 16:09:42.103645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.724 [2024-11-29 16:09:42.103670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:30.724 [2024-11-29 16:09:42.103678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.370 ms 00:27:30.724 [2024-11-29 16:09:42.103687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.724 [2024-11-29 16:09:42.103833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.724 [2024-11-29 16:09:42.103841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:30.724 [2024-11-29 16:09:42.103847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.132 ms 00:27:30.724 [2024-11-29 16:09:42.103852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.724 [2024-11-29 16:09:42.139067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.724 [2024-11-29 16:09:42.139094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:30.724 [2024-11-29 16:09:42.139105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.724 [2024-11-29 16:09:42.139113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.724 [2024-11-29 16:09:42.139134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.724 [2024-11-29 16:09:42.139140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:30.724 [2024-11-29 16:09:42.139146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.724 [2024-11-29 16:09:42.139151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.724 [2024-11-29 16:09:42.139193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.724 [2024-11-29 16:09:42.139200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:30.724 [2024-11-29 16:09:42.139207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.724 [2024-11-29 16:09:42.139212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.724 [2024-11-29 16:09:42.139226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.724 [2024-11-29 16:09:42.139233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:30.724 [2024-11-29 16:09:42.139238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.724 [2024-11-29 16:09:42.139243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.985 [2024-11-29 16:09:42.197302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.985 [2024-11-29 16:09:42.197334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:30.985 [2024-11-29 16:09:42.197342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.985 [2024-11-29 16:09:42.197352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.985 [2024-11-29 16:09:42.219391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.985 [2024-11-29 16:09:42.219420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:30.985 
[2024-11-29 16:09:42.219427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.985 [2024-11-29 16:09:42.219434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.985 [2024-11-29 16:09:42.219476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.985 [2024-11-29 16:09:42.219483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:30.985 [2024-11-29 16:09:42.219490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.985 [2024-11-29 16:09:42.219496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.985 [2024-11-29 16:09:42.219527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.985 [2024-11-29 16:09:42.219535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:30.985 [2024-11-29 16:09:42.219541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.985 [2024-11-29 16:09:42.219547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.985 [2024-11-29 16:09:42.219610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.985 [2024-11-29 16:09:42.219617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:30.985 [2024-11-29 16:09:42.219623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.985 [2024-11-29 16:09:42.219629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.985 [2024-11-29 16:09:42.219650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.985 [2024-11-29 16:09:42.219658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:30.985 [2024-11-29 16:09:42.219664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.985 [2024-11-29 16:09:42.219670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.985 [2024-11-29 16:09:42.219696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.985 [2024-11-29 16:09:42.219703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:30.985 [2024-11-29 16:09:42.219709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.985 [2024-11-29 16:09:42.219714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.985 [2024-11-29 16:09:42.219749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:30.985 [2024-11-29 16:09:42.219757] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:30.985 [2024-11-29 16:09:42.219762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:30.985 [2024-11-29 16:09:42.219768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.985 [2024-11-29 16:09:42.219858] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8342.562 ms, result 0 00:27:35.214 16:09:46 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:35.214 16:09:46 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:35.214 16:09:46 -- ftl/common.sh@81 -- # local base_bdev= 00:27:35.214 16:09:46 -- ftl/common.sh@82 -- # local cache_bdev= 00:27:35.214 16:09:46 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:35.214 16:09:46 -- ftl/common.sh@89 -- # spdk_tgt_pid=79235 00:27:35.214 16:09:46 -- 
ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:35.214 16:09:46 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:35.214 16:09:46 -- ftl/common.sh@91 -- # waitforlisten 79235 00:27:35.214 16:09:46 -- common/autotest_common.sh@829 -- # '[' -z 79235 ']' 00:27:35.214 16:09:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.214 16:09:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:35.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:35.214 16:09:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.214 16:09:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:35.214 16:09:46 -- common/autotest_common.sh@10 -- # set +x 00:27:35.214 [2024-11-29 16:09:46.588600] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:35.214 [2024-11-29 16:09:46.588724] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79235 ] 00:27:35.497 [2024-11-29 16:09:46.738479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.813 [2024-11-29 16:09:46.950912] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:35.813 [2024-11-29 16:09:46.951157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.392 [2024-11-29 16:09:47.643670] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:36.392 [2024-11-29 16:09:47.643756] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:36.392 [2024-11-29 16:09:47.789917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.392 [2024-11-29 16:09:47.789959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:36.392 [2024-11-29 16:09:47.789982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:36.392 [2024-11-29 16:09:47.789990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.392 [2024-11-29 16:09:47.790044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.392 [2024-11-29 16:09:47.790056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:36.392 [2024-11-29 16:09:47.790064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:27:36.392 [2024-11-29 16:09:47.790071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.392 [2024-11-29 16:09:47.790092] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:36.392 [2024-11-29 16:09:47.790804] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:36.392 [2024-11-29 16:09:47.790832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.392 [2024-11-29 16:09:47.790839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:36.392 [2024-11-29 16:09:47.790847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.743 ms 00:27:36.392 [2024-11-29 16:09:47.790854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:36.392 [2024-11-29 16:09:47.791953] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:36.392 [2024-11-29 16:09:47.804720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.392 [2024-11-29 16:09:47.804755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:36.392 [2024-11-29 16:09:47.804765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.768 ms 00:27:36.392 [2024-11-29 16:09:47.804773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.392 [2024-11-29 16:09:47.804827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.392 [2024-11-29 16:09:47.804836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:36.392 [2024-11-29 16:09:47.804844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:36.392 [2024-11-29 16:09:47.804851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.392 [2024-11-29 16:09:47.809925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.392 [2024-11-29 16:09:47.809953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:36.392 [2024-11-29 16:09:47.809962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.014 ms 00:27:36.392 [2024-11-29 16:09:47.809984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.392 [2024-11-29 16:09:47.810018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.392 [2024-11-29 16:09:47.810026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:36.392 [2024-11-29 16:09:47.810033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:36.392 [2024-11-29 16:09:47.810040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.392 [2024-11-29 16:09:47.810083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.392 [2024-11-29 16:09:47.810092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:36.392 [2024-11-29 16:09:47.810099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:36.392 [2024-11-29 16:09:47.810106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.392 [2024-11-29 16:09:47.810132] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:36.392 [2024-11-29 16:09:47.813580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.392 [2024-11-29 16:09:47.813607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:36.392 [2024-11-29 16:09:47.813618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.456 ms 00:27:36.392 [2024-11-29 16:09:47.813625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.392 [2024-11-29 16:09:47.813654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.392 [2024-11-29 16:09:47.813662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:36.392 [2024-11-29 16:09:47.813670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:36.392 [2024-11-29 16:09:47.813677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.392 [2024-11-29 16:09:47.813705] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 
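The startup sequence above is driven by tcp_target_setup, traced earlier at ftl/upgrade_shutdown.sh@75 and ftl/common.sh@81-91: the target is relaunched on core 0 with the saved tgt.json, the script blocks until the RPC socket answers, and FTL is then reloaded from its superblock ('Load super block', 'SHM: clean 0, shm_clean 0' above) rather than formatted from scratch. A minimal sketch of that helper, assuming $rootdir stands for /home/vagrant/spdk_repo/spdk and leaving out the real script's retry and error handling:

  # Sketch only, reduced from the ftl/common.sh xtrace above; waitforlisten
  # is SPDK's autotest_common.sh helper that polls /var/tmp/spdk.sock.
  "$rootdir/build/bin/spdk_tgt" --cpumask='[0]' \
      --config="$rootdir/test/ftl/config/tgt.json" &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"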
00:27:36.392 [2024-11-29 16:09:47.813723] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:27:36.392 [2024-11-29 16:09:47.813754] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:36.392 [2024-11-29 16:09:47.813770] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:27:36.392 [2024-11-29 16:09:47.813843] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:36.392 [2024-11-29 16:09:47.813852] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:36.392 [2024-11-29 16:09:47.813861] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:27:36.392 [2024-11-29 16:09:47.813871] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:36.392 [2024-11-29 16:09:47.813879] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:36.392 [2024-11-29 16:09:47.813887] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:36.392 [2024-11-29 16:09:47.813896] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:36.392 [2024-11-29 16:09:47.813903] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:36.392 [2024-11-29 16:09:47.813912] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:36.392 [2024-11-29 16:09:47.813919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.392 [2024-11-29 16:09:47.813925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:36.392 [2024-11-29 16:09:47.813933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.224 ms 00:27:36.392 [2024-11-29 16:09:47.813939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.393 [2024-11-29 16:09:47.814010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.393 [2024-11-29 16:09:47.814018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:36.393 [2024-11-29 16:09:47.814025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:27:36.393 [2024-11-29 16:09:47.814032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.393 [2024-11-29 16:09:47.814116] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:36.393 [2024-11-29 16:09:47.814126] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:36.393 [2024-11-29 16:09:47.814134] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:36.393 [2024-11-29 16:09:47.814142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.393 [2024-11-29 16:09:47.814149] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:36.393 [2024-11-29 16:09:47.814156] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:36.393 [2024-11-29 16:09:47.814163] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:36.393 [2024-11-29 16:09:47.814169] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:36.393 [2024-11-29 16:09:47.814177] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 
MiB 00:27:36.393 [2024-11-29 16:09:47.814184] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.393 [2024-11-29 16:09:47.814190] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:36.393 [2024-11-29 16:09:47.814196] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:36.393 [2024-11-29 16:09:47.814202] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.393 [2024-11-29 16:09:47.814209] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:36.393 [2024-11-29 16:09:47.814216] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:36.393 [2024-11-29 16:09:47.814222] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.393 [2024-11-29 16:09:47.814228] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:36.393 [2024-11-29 16:09:47.814234] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:36.393 [2024-11-29 16:09:47.814240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.393 [2024-11-29 16:09:47.814246] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:36.393 [2024-11-29 16:09:47.814253] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:36.393 [2024-11-29 16:09:47.814260] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:36.393 [2024-11-29 16:09:47.814266] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:36.393 [2024-11-29 16:09:47.814273] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:36.393 [2024-11-29 16:09:47.814279] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:36.393 [2024-11-29 16:09:47.814285] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:36.393 [2024-11-29 16:09:47.814291] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:36.393 [2024-11-29 16:09:47.814298] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:36.393 [2024-11-29 16:09:47.814304] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:36.393 [2024-11-29 16:09:47.814310] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:36.393 [2024-11-29 16:09:47.814316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:36.393 [2024-11-29 16:09:47.814323] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:36.393 [2024-11-29 16:09:47.814329] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:36.393 [2024-11-29 16:09:47.814335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:36.393 [2024-11-29 16:09:47.814341] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:36.393 [2024-11-29 16:09:47.814347] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:36.393 [2024-11-29 16:09:47.814353] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.393 [2024-11-29 16:09:47.814360] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:36.393 [2024-11-29 16:09:47.814366] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:36.393 [2024-11-29 16:09:47.814373] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.393 [2024-11-29 16:09:47.814379] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:27:36.393 [2024-11-29 16:09:47.814387] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:36.393 [2024-11-29 16:09:47.814394] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:36.393 [2024-11-29 16:09:47.814400] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.393 [2024-11-29 16:09:47.814407] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:36.393 [2024-11-29 16:09:47.814414] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:36.393 [2024-11-29 16:09:47.814420] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:36.393 [2024-11-29 16:09:47.814427] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:36.393 [2024-11-29 16:09:47.814433] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:36.393 [2024-11-29 16:09:47.814439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:36.393 [2024-11-29 16:09:47.814446] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:36.393 [2024-11-29 16:09:47.814455] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:36.393 [2024-11-29 16:09:47.814466] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:36.393 [2024-11-29 16:09:47.814473] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:36.393 [2024-11-29 16:09:47.814480] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:36.393 [2024-11-29 16:09:47.814487] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:36.393 [2024-11-29 16:09:47.814493] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:36.393 [2024-11-29 16:09:47.814506] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:36.393 [2024-11-29 16:09:47.814513] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:36.393 [2024-11-29 16:09:47.814519] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:36.393 [2024-11-29 16:09:47.814526] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:27:36.393 [2024-11-29 16:09:47.814533] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:36.393 [2024-11-29 16:09:47.814541] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:36.393 [2024-11-29 16:09:47.814548] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:27:36.393 [2024-11-29 16:09:47.814555] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 
blk_offs:0x101f60 blk_sz:0x3e0a0 00:27:36.393 [2024-11-29 16:09:47.814561] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:36.393 [2024-11-29 16:09:47.814569] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:36.393 [2024-11-29 16:09:47.814577] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:36.393 [2024-11-29 16:09:47.814584] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:36.393 [2024-11-29 16:09:47.814591] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:36.393 [2024-11-29 16:09:47.814598] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:36.393 [2024-11-29 16:09:47.814605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.393 [2024-11-29 16:09:47.814615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:36.393 [2024-11-29 16:09:47.814623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.531 ms 00:27:36.393 [2024-11-29 16:09:47.814629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.655 [2024-11-29 16:09:47.829689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.655 [2024-11-29 16:09:47.829736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:36.655 [2024-11-29 16:09:47.829748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.014 ms 00:27:36.655 [2024-11-29 16:09:47.829755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.655 [2024-11-29 16:09:47.829795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.655 [2024-11-29 16:09:47.829803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:36.655 [2024-11-29 16:09:47.829811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:36.655 [2024-11-29 16:09:47.829818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.655 [2024-11-29 16:09:47.860358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.655 [2024-11-29 16:09:47.860391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:36.655 [2024-11-29 16:09:47.860402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 30.488 ms 00:27:36.655 [2024-11-29 16:09:47.860409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.655 [2024-11-29 16:09:47.860435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.655 [2024-11-29 16:09:47.860443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:36.655 [2024-11-29 16:09:47.860451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:36.655 [2024-11-29 16:09:47.860458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.655 [2024-11-29 16:09:47.860828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.655 [2024-11-29 16:09:47.860871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:36.655 [2024-11-29 
16:09:47.860880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.323 ms 00:27:36.655 [2024-11-29 16:09:47.860887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.655 [2024-11-29 16:09:47.860927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.655 [2024-11-29 16:09:47.860935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:36.655 [2024-11-29 16:09:47.860943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:36.655 [2024-11-29 16:09:47.860950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.655 [2024-11-29 16:09:47.875880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.655 [2024-11-29 16:09:47.875910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:36.655 [2024-11-29 16:09:47.875920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.897 ms 00:27:36.655 [2024-11-29 16:09:47.875927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.655 [2024-11-29 16:09:47.888890] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:36.655 [2024-11-29 16:09:47.888924] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:36.655 [2024-11-29 16:09:47.888935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.655 [2024-11-29 16:09:47.888942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:36.655 [2024-11-29 16:09:47.888951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.901 ms 00:27:36.655 [2024-11-29 16:09:47.888963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.655 [2024-11-29 16:09:47.902932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.655 [2024-11-29 16:09:47.902963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:36.655 [2024-11-29 16:09:47.902983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.922 ms 00:27:36.655 [2024-11-29 16:09:47.902991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.655 [2024-11-29 16:09:47.914427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.655 [2024-11-29 16:09:47.914458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:36.655 [2024-11-29 16:09:47.914467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.397 ms 00:27:36.655 [2024-11-29 16:09:47.914474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.655 [2024-11-29 16:09:47.926088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.655 [2024-11-29 16:09:47.926121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:36.655 [2024-11-29 16:09:47.926132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.579 ms 00:27:36.655 [2024-11-29 16:09:47.926140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.655 [2024-11-29 16:09:47.926501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.655 [2024-11-29 16:09:47.926519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:36.655 [2024-11-29 16:09:47.926527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.273 ms 
00:27:36.655 [2024-11-29 16:09:47.926535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.655 [2024-11-29 16:09:47.986780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.655 [2024-11-29 16:09:47.986827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:36.655 [2024-11-29 16:09:47.986840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 60.225 ms 00:27:36.655 [2024-11-29 16:09:47.986848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.655 [2024-11-29 16:09:47.997708] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:36.655 [2024-11-29 16:09:47.998608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.655 [2024-11-29 16:09:47.998647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:36.655 [2024-11-29 16:09:47.998658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.703 ms 00:27:36.655 [2024-11-29 16:09:47.998672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.655 [2024-11-29 16:09:47.998741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.655 [2024-11-29 16:09:47.998751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:36.655 [2024-11-29 16:09:47.998760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:36.655 [2024-11-29 16:09:47.998768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.655 [2024-11-29 16:09:47.998822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.655 [2024-11-29 16:09:47.998833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:36.655 [2024-11-29 16:09:47.998842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:36.655 [2024-11-29 16:09:47.998849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.655 [2024-11-29 16:09:48.000285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.655 [2024-11-29 16:09:48.000326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:36.655 [2024-11-29 16:09:48.000337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.411 ms 00:27:36.655 [2024-11-29 16:09:48.000344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.655 [2024-11-29 16:09:48.000382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.655 [2024-11-29 16:09:48.000391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:36.655 [2024-11-29 16:09:48.000400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:36.655 [2024-11-29 16:09:48.000408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.655 [2024-11-29 16:09:48.000446] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:36.655 [2024-11-29 16:09:48.000456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.655 [2024-11-29 16:09:48.000467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:36.655 [2024-11-29 16:09:48.000477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:36.655 [2024-11-29 16:09:48.000485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.655 [2024-11-29 16:09:48.025923] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.655 [2024-11-29 16:09:48.025983] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:36.655 [2024-11-29 16:09:48.025997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.412 ms 00:27:36.655 [2024-11-29 16:09:48.026005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.655 [2024-11-29 16:09:48.026096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.655 [2024-11-29 16:09:48.026107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:36.655 [2024-11-29 16:09:48.026116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:27:36.655 [2024-11-29 16:09:48.026124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.655 [2024-11-29 16:09:48.027366] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 236.946 ms, result 0 00:27:36.655 [2024-11-29 16:09:48.042329] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:36.655 [2024-11-29 16:09:48.058334] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:27:36.655 [2024-11-29 16:09:48.066492] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:37.593 16:09:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:37.593 16:09:48 -- common/autotest_common.sh@862 -- # return 0 00:27:37.593 16:09:48 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:37.593 16:09:48 -- ftl/common.sh@95 -- # return 0 00:27:37.593 16:09:48 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:37.593 [2024-11-29 16:09:48.919946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.593 [2024-11-29 16:09:48.919987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:37.593 [2024-11-29 16:09:48.919997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:37.593 [2024-11-29 16:09:48.920004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.593 [2024-11-29 16:09:48.920021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.593 [2024-11-29 16:09:48.920028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:37.593 [2024-11-29 16:09:48.920034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:37.593 [2024-11-29 16:09:48.920043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.593 [2024-11-29 16:09:48.920058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.593 [2024-11-29 16:09:48.920064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:37.593 [2024-11-29 16:09:48.920070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:37.593 [2024-11-29 16:09:48.920075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.593 [2024-11-29 16:09:48.920115] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.164 ms, result 0 00:27:37.593 true 00:27:37.593 16:09:48 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b 
ftl
00:27:37.851 {
00:27:37.851 "name": "ftl",
00:27:37.851 "properties": [
00:27:37.851 {
00:27:37.851 "name": "superblock_version",
00:27:37.851 "value": 5,
00:27:37.851 "read-only": true
00:27:37.851 },
00:27:37.851 {
00:27:37.851 "name": "base_device",
00:27:37.851 "bands": [
00:27:37.851 {
00:27:37.851 "id": 0,
00:27:37.851 "state": "CLOSED",
00:27:37.851 "validity": 1.0
00:27:37.851 },
00:27:37.851 {
00:27:37.851 "id": 1,
00:27:37.851 "state": "CLOSED",
00:27:37.851 "validity": 1.0
00:27:37.851 },
00:27:37.851 {
00:27:37.851 "id": 2,
00:27:37.851 "state": "CLOSED",
00:27:37.851 "validity": 0.007843137254901933
00:27:37.851 },
00:27:37.851 {
00:27:37.851 "id": 3,
00:27:37.851 "state": "FREE",
00:27:37.851 "validity": 0.0
00:27:37.851 },
00:27:37.851 {
00:27:37.851 "id": 4,
00:27:37.851 "state": "FREE",
00:27:37.851 "validity": 0.0
00:27:37.851 },
00:27:37.851 {
00:27:37.851 "id": 5,
00:27:37.851 "state": "FREE",
00:27:37.851 "validity": 0.0
00:27:37.851 },
00:27:37.851 {
00:27:37.851 "id": 6,
00:27:37.851 "state": "FREE",
00:27:37.851 "validity": 0.0
00:27:37.851 },
00:27:37.851 {
00:27:37.851 "id": 7,
00:27:37.851 "state": "FREE",
00:27:37.851 "validity": 0.0
00:27:37.851 },
00:27:37.851 {
00:27:37.851 "id": 8,
00:27:37.851 "state": "FREE",
00:27:37.851 "validity": 0.0
00:27:37.851 },
00:27:37.851 {
00:27:37.851 "id": 9,
00:27:37.851 "state": "FREE",
00:27:37.851 "validity": 0.0
00:27:37.851 },
00:27:37.851 {
00:27:37.851 "id": 10,
00:27:37.851 "state": "FREE",
00:27:37.851 "validity": 0.0
00:27:37.851 },
00:27:37.851 {
00:27:37.851 "id": 11,
00:27:37.851 "state": "FREE",
00:27:37.851 "validity": 0.0
00:27:37.851 },
00:27:37.851 {
00:27:37.851 "id": 12,
00:27:37.851 "state": "FREE",
00:27:37.851 "validity": 0.0
00:27:37.851 },
00:27:37.851 {
00:27:37.851 "id": 13,
00:27:37.851 "state": "FREE",
00:27:37.851 "validity": 0.0
00:27:37.851 },
00:27:37.851 {
00:27:37.851 "id": 14,
00:27:37.851 "state": "FREE",
00:27:37.851 "validity": 0.0
00:27:37.851 },
00:27:37.851 {
00:27:37.851 "id": 15,
00:27:37.851 "state": "FREE",
00:27:37.851 "validity": 0.0
00:27:37.851 },
00:27:37.851 {
00:27:37.851 "id": 16,
00:27:37.851 "state": "FREE",
00:27:37.851 "validity": 0.0
00:27:37.851 },
00:27:37.851 {
00:27:37.851 "id": 17,
00:27:37.851 "state": "FREE",
00:27:37.851 "validity": 0.0
00:27:37.851 }
00:27:37.851 ],
00:27:37.851 "read-only": true
00:27:37.851 },
00:27:37.851 {
00:27:37.851 "name": "cache_device",
00:27:37.851 "type": "bdev",
00:27:37.851 "chunks": [
00:27:37.851 {
00:27:37.851 "id": 0,
00:27:37.851 "state": "OPEN",
00:27:37.851 "utilization": 0.0
00:27:37.851 },
00:27:37.851 {
00:27:37.851 "id": 1,
00:27:37.851 "state": "OPEN",
00:27:37.851 "utilization": 0.0
00:27:37.851 },
00:27:37.851 {
00:27:37.851 "id": 2,
00:27:37.851 "state": "FREE",
00:27:37.851 "utilization": 0.0
00:27:37.851 },
00:27:37.851 {
00:27:37.851 "id": 3,
00:27:37.851 "state": "FREE",
00:27:37.851 "utilization": 0.0
00:27:37.851 }
00:27:37.851 ],
00:27:37.851 "read-only": true
00:27:37.851 },
00:27:37.851 {
00:27:37.851 "name": "verbose_mode",
00:27:37.851 "value": true,
00:27:37.851 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:27:37.851 },
00:27:37.851 {
00:27:37.851 "name": "prep_upgrade_on_shutdown",
00:27:37.851 "value": false,
00:27:37.851 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:27:37.851 }
00:27:37.851 ]
00:27:37.851 }
00:27:37.852 16:09:49 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
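This dump is the post-restart counterpart of the one taken before shutdown: prep_upgrade_on_shutdown has reverted to false, bands 0-2 are CLOSED and carry the test data, and every cache_device chunk now reports utilization 0.0. The check traced next counts chunks that still hold buffered writes; before the shutdown the same filter returned used=3 (upgrade_shutdown.sh@63 above), and after it the count must be 0, i.e. the shutdown drained the NV cache into the base device. A sketch of that assertion, reusing the exact jq filter from the xtrace (the failure handling is an assumption, since the trace only shows the evaluated test):

  # Count NV-cache chunks that still hold data; 0 means the write buffer
  # was fully flushed by the prep_upgrade_on_shutdown path.
  used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "cache_device")
           | .chunks[] | select(.utilization != 0.0)] | length')
  [[ $used -ne 0 ]] && exit 1   # assumed failure path for upgrade_shutdown.sh@83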
00:27:37.852 16:09:49 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:27:38.110 16:09:49 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:27:38.110 16:09:49 -- ftl/upgrade_shutdown.sh@82 -- # used=0
00:27:38.110 16:09:49 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]]
00:27:38.110 16:09:49 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties
00:27:38.110 16:09:49 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:27:38.110 16:09:49 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length'
00:27:38.110 16:09:49 -- ftl/upgrade_shutdown.sh@89 -- # opened=0
00:27:38.110 16:09:49 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]]
00:27:38.110 16:09:49 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum
00:27:38.110 16:09:49 -- ftl/upgrade_shutdown.sh@96 -- # skip=0
00:27:38.110 16:09:49 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
00:27:38.110 16:09:49 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:27:38.110 16:09:49 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
00:27:38.110 Validate MD5 checksum, iteration 1
00:27:38.110 16:09:49 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:27:38.110 16:09:49 -- ftl/common.sh@198 -- # tcp_initiator_setup
00:27:38.110 16:09:49 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:27:38.110 16:09:49 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:27:38.110 16:09:49 -- ftl/common.sh@154 -- # return 0
00:27:38.110 16:09:49 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:27:38.368 [2024-11-29 16:09:49.556796] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:27:38.368 [2024-11-29 16:09:49.556900] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79287 ] 00:27:38.368 [2024-11-29 16:09:49.706757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.626 [2024-11-29 16:09:49.899598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.007  [2024-11-29T16:09:52.381Z] Copying: 624/1024 [MB] (624 MBps) [2024-11-29T16:09:53.759Z] Copying: 1024/1024 [MB] (average 557 MBps) 00:27:42.328 00:27:42.328 16:09:53 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:42.328 16:09:53 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:44.229 16:09:55 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:44.229 Validate MD5 checksum, iteration 2 00:27:44.229 16:09:55 -- ftl/upgrade_shutdown.sh@103 -- # sum=764352b4387f108a2d12d57d519c06c0 00:27:44.229 16:09:55 -- ftl/upgrade_shutdown.sh@105 -- # [[ 764352b4387f108a2d12d57d519c06c0 != \7\6\4\3\5\2\b\4\3\8\7\f\1\0\8\a\2\d\1\2\d\5\7\d\5\1\9\c\0\6\c\0 ]] 00:27:44.229 16:09:55 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:44.229 16:09:55 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:44.229 16:09:55 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:44.230 16:09:55 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:44.230 16:09:55 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:44.230 16:09:55 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:44.230 16:09:55 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:44.230 16:09:55 -- ftl/common.sh@154 -- # return 0 00:27:44.230 16:09:55 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:44.230 [2024-11-29 16:09:55.617183] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
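Each validation pass reads a 1 GiB extent from the exported FTL namespace over NVMe/TCP into a scratch file and hashes it, advancing the skip offset by 1024 MiB per iteration. The loop body traced above is roughly (a sketch of test_validate_checksum; tcp_dd wraps the spdk_dd invocation shown, sums[] stands in for the reference hashes):

  tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
  sum=$(md5sum "$testfile" | cut -f1 -d' ')
  [[ $sum == "${sums[$i]}" ]] || return 1
  skip=$((skip + 1024))

Iteration 1 hashes to 764352b4387f108a2d12d57d519c06c0 and iteration 2 to 390cdfa3d7c658202967a4f828919810; the same two values must come back after the dirty shutdown and recovery below.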
00:27:44.230 [2024-11-29 16:09:55.617414] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79348 ] 00:27:44.488 [2024-11-29 16:09:55.766671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.746 [2024-11-29 16:09:55.957393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.131  [2024-11-29T16:09:58.507Z] Copying: 587/1024 [MB] (587 MBps) [2024-11-29T16:09:59.449Z] Copying: 1024/1024 [MB] (average 593 MBps) 00:27:48.018 00:27:48.018 16:09:59 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:48.018 16:09:59 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:49.930 16:10:00 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:49.930 16:10:00 -- ftl/upgrade_shutdown.sh@103 -- # sum=390cdfa3d7c658202967a4f828919810 00:27:49.930 16:10:00 -- ftl/upgrade_shutdown.sh@105 -- # [[ 390cdfa3d7c658202967a4f828919810 != \3\9\0\c\d\f\a\3\d\7\c\6\5\8\2\0\2\9\6\7\a\4\f\8\2\8\9\1\9\8\1\0 ]] 00:27:49.930 16:10:00 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:49.930 16:10:00 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:49.930 16:10:00 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:49.930 16:10:00 -- ftl/common.sh@137 -- # [[ -n 79235 ]] 00:27:49.930 16:10:00 -- ftl/common.sh@138 -- # kill -9 79235 00:27:49.930 16:10:00 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:49.930 16:10:00 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:49.930 16:10:00 -- ftl/common.sh@81 -- # local base_bdev= 00:27:49.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:49.930 16:10:00 -- ftl/common.sh@82 -- # local cache_bdev= 00:27:49.930 16:10:00 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:49.930 16:10:00 -- ftl/common.sh@89 -- # spdk_tgt_pid=79410 00:27:49.930 16:10:00 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:49.930 16:10:00 -- ftl/common.sh@91 -- # waitforlisten 79410 00:27:49.930 16:10:00 -- common/autotest_common.sh@829 -- # '[' -z 79410 ']' 00:27:49.930 16:10:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.930 16:10:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:49.930 16:10:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.930 16:10:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:49.930 16:10:00 -- common/autotest_common.sh@10 -- # set +x 00:27:49.930 16:10:00 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:49.930 [2024-11-29 16:10:00.928957] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
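This is the core of the test: tcp_target_shutdown_dirty sends SIGKILL to the running target, so FTL gets no chance to persist metadata or set its clean-state flag, and tcp_target_setup immediately relaunches spdk_tgt from the saved tgt.json. In outline (a simplified sketch of the ftl/common.sh helpers):

  kill -9 "$spdk_tgt_pid"        # dirty: no FTL shutdown path runs
  unset spdk_tgt_pid
  "$spdk_tgt_bin" "--cpumask=[0]" --config=test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"  # wait for /var/tmp/spdk.sock

Everything that follows — superblock load, P2L checkpoint replay, open-chunk recovery — is FTL reconstructing consistent state from what survived the SIGKILL.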
00:27:49.930 [2024-11-29 16:10:00.929099] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79410 ] 00:27:49.930 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 79235 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:49.930 [2024-11-29 16:10:01.080737] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.930 [2024-11-29 16:10:01.240313] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:49.930 [2024-11-29 16:10:01.240461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.499 [2024-11-29 16:10:01.767377] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:50.499 [2024-11-29 16:10:01.767425] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:50.499 [2024-11-29 16:10:01.907776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.499 [2024-11-29 16:10:01.907819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:50.499 [2024-11-29 16:10:01.907832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:50.499 [2024-11-29 16:10:01.907840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.499 [2024-11-29 16:10:01.907893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.499 [2024-11-29 16:10:01.907906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:50.499 [2024-11-29 16:10:01.907914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:27:50.499 [2024-11-29 16:10:01.907921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.499 [2024-11-29 16:10:01.907940] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:50.499 [2024-11-29 16:10:01.908679] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:50.499 [2024-11-29 16:10:01.908701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.499 [2024-11-29 16:10:01.908708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:50.499 [2024-11-29 16:10:01.908716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.765 ms 00:27:50.499 [2024-11-29 16:10:01.908723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.499 [2024-11-29 16:10:01.908967] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:50.499 [2024-11-29 16:10:01.925667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.499 [2024-11-29 16:10:01.925708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:50.499 [2024-11-29 16:10:01.925720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.700 ms 00:27:50.499 [2024-11-29 16:10:01.925727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.762 [2024-11-29 16:10:01.934503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.762 [2024-11-29 16:10:01.934536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:50.762 [2024-11-29 16:10:01.934545] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:27:50.762 [2024-11-29 16:10:01.934553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.762 [2024-11-29 16:10:01.934870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.762 [2024-11-29 16:10:01.934882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:50.762 [2024-11-29 16:10:01.934891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.243 ms 00:27:50.762 [2024-11-29 16:10:01.934898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.762 [2024-11-29 16:10:01.934931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.762 [2024-11-29 16:10:01.934940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:50.762 [2024-11-29 16:10:01.934947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:50.762 [2024-11-29 16:10:01.934957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.762 [2024-11-29 16:10:01.935009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.762 [2024-11-29 16:10:01.935018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:50.762 [2024-11-29 16:10:01.935026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:50.763 [2024-11-29 16:10:01.935034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.763 [2024-11-29 16:10:01.935057] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:50.763 [2024-11-29 16:10:01.938245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.763 [2024-11-29 16:10:01.938275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:50.763 [2024-11-29 16:10:01.938284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.196 ms 00:27:50.763 [2024-11-29 16:10:01.938291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.763 [2024-11-29 16:10:01.938323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.763 [2024-11-29 16:10:01.938332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:50.763 [2024-11-29 16:10:01.938342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:50.763 [2024-11-29 16:10:01.938349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.763 [2024-11-29 16:10:01.938369] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:50.763 [2024-11-29 16:10:01.938387] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:27:50.763 [2024-11-29 16:10:01.938421] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:50.763 [2024-11-29 16:10:01.938436] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:27:50.763 [2024-11-29 16:10:01.938510] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:50.763 [2024-11-29 16:10:01.938522] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:50.763 [2024-11-29 16:10:01.938535] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl] layout blob store 0x140 bytes 00:27:50.763 [2024-11-29 16:10:01.938545] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:50.763 [2024-11-29 16:10:01.938554] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:50.763 [2024-11-29 16:10:01.938562] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:50.763 [2024-11-29 16:10:01.938569] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:50.763 [2024-11-29 16:10:01.938576] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:50.763 [2024-11-29 16:10:01.938583] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:50.763 [2024-11-29 16:10:01.938591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.763 [2024-11-29 16:10:01.938598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:50.763 [2024-11-29 16:10:01.938605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.224 ms 00:27:50.763 [2024-11-29 16:10:01.938614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.763 [2024-11-29 16:10:01.938676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.763 [2024-11-29 16:10:01.938684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:50.763 [2024-11-29 16:10:01.938691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:27:50.763 [2024-11-29 16:10:01.938698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.763 [2024-11-29 16:10:01.938783] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:50.763 [2024-11-29 16:10:01.938793] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:50.763 [2024-11-29 16:10:01.938801] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:50.763 [2024-11-29 16:10:01.938808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.763 [2024-11-29 16:10:01.938819] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:50.763 [2024-11-29 16:10:01.938825] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:50.763 [2024-11-29 16:10:01.938833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:50.763 [2024-11-29 16:10:01.938839] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:50.763 [2024-11-29 16:10:01.938847] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:50.763 [2024-11-29 16:10:01.938853] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.763 [2024-11-29 16:10:01.938863] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:50.763 [2024-11-29 16:10:01.938870] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:50.763 [2024-11-29 16:10:01.938877] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.763 [2024-11-29 16:10:01.938883] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:50.763 [2024-11-29 16:10:01.938890] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:50.763 [2024-11-29 16:10:01.938897] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.763 [2024-11-29 16:10:01.938903] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region 
nvc_md_mirror 00:27:50.763 [2024-11-29 16:10:01.938909] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:50.763 [2024-11-29 16:10:01.938915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.763 [2024-11-29 16:10:01.938921] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:50.763 [2024-11-29 16:10:01.938927] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:50.763 [2024-11-29 16:10:01.938934] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:50.763 [2024-11-29 16:10:01.938940] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:50.763 [2024-11-29 16:10:01.938947] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:50.763 [2024-11-29 16:10:01.938953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:50.763 [2024-11-29 16:10:01.938959] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:50.763 [2024-11-29 16:10:01.938966] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:50.763 [2024-11-29 16:10:01.938984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:50.763 [2024-11-29 16:10:01.938991] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:50.763 [2024-11-29 16:10:01.938997] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:50.763 [2024-11-29 16:10:01.939004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:50.763 [2024-11-29 16:10:01.939010] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:50.763 [2024-11-29 16:10:01.939016] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:50.763 [2024-11-29 16:10:01.939022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:50.763 [2024-11-29 16:10:01.939029] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:50.763 [2024-11-29 16:10:01.939036] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:50.763 [2024-11-29 16:10:01.939042] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.763 [2024-11-29 16:10:01.939048] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:50.763 [2024-11-29 16:10:01.939055] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:50.763 [2024-11-29 16:10:01.939062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.763 [2024-11-29 16:10:01.939068] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:50.763 [2024-11-29 16:10:01.939075] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:50.763 [2024-11-29 16:10:01.939084] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:50.763 [2024-11-29 16:10:01.939092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.763 [2024-11-29 16:10:01.939100] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:50.763 [2024-11-29 16:10:01.939107] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:50.763 [2024-11-29 16:10:01.939114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:50.763 [2024-11-29 16:10:01.939121] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:50.763 [2024-11-29 16:10:01.939128] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 
0.25 MiB 00:27:50.763 [2024-11-29 16:10:01.939135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:50.763 [2024-11-29 16:10:01.939142] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:50.763 [2024-11-29 16:10:01.939151] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:50.763 [2024-11-29 16:10:01.939159] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:50.763 [2024-11-29 16:10:01.939167] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:50.763 [2024-11-29 16:10:01.939174] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:50.763 [2024-11-29 16:10:01.939186] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:50.763 [2024-11-29 16:10:01.939194] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:50.763 [2024-11-29 16:10:01.939201] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:50.763 [2024-11-29 16:10:01.939208] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:50.763 [2024-11-29 16:10:01.939215] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:50.763 [2024-11-29 16:10:01.939222] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:27:50.763 [2024-11-29 16:10:01.939229] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:50.763 [2024-11-29 16:10:01.939236] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:50.763 [2024-11-29 16:10:01.939243] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:27:50.763 [2024-11-29 16:10:01.939251] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:27:50.763 [2024-11-29 16:10:01.939257] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:50.763 [2024-11-29 16:10:01.939265] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:50.764 [2024-11-29 16:10:01.939273] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:50.764 [2024-11-29 16:10:01.939280] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:50.764 [2024-11-29 16:10:01.939287] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:50.764 
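The superblock region table is easy to sanity-check against the MiB figures in the layout dump, assuming the 4 KiB FTL block size that every entry here is consistent with: the small 0x20-block regions are 32 x 4 KiB = 0.12 MiB as dumped, the nvc data region type 0x8 with blk_sz 0x100000 is 1,048,576 x 4 KiB = 4096 MiB (data_nvc), and the base-device data region type 0x9 with blk_sz 0x480000 is 4,718,592 x 4 KiB = 18432 MiB (data_btm). For example:

  echo $(( 0x480000 * 4096 / 1048576 ))   # blocks * block size / 1 MiB -> 18432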
[2024-11-29 16:10:01.939294] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:50.764 [2024-11-29 16:10:01.939302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.764 [2024-11-29 16:10:01.939309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:50.764 [2024-11-29 16:10:01.939316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.563 ms 00:27:50.764 [2024-11-29 16:10:01.939326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.764 [2024-11-29 16:10:01.953569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.764 [2024-11-29 16:10:01.953730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:50.764 [2024-11-29 16:10:01.953899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.197 ms 00:27:50.764 [2024-11-29 16:10:01.953936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.764 [2024-11-29 16:10:01.954014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.764 [2024-11-29 16:10:01.954127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:50.764 [2024-11-29 16:10:01.954261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:50.764 [2024-11-29 16:10:01.954285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.764 [2024-11-29 16:10:01.987355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.764 [2024-11-29 16:10:01.987509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:50.764 [2024-11-29 16:10:01.987568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 33.003 ms 00:27:50.764 [2024-11-29 16:10:01.987591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.764 [2024-11-29 16:10:01.987651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.764 [2024-11-29 16:10:01.987674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:50.764 [2024-11-29 16:10:01.987695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:50.764 [2024-11-29 16:10:01.987714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.764 [2024-11-29 16:10:01.987829] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.764 [2024-11-29 16:10:01.987918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:50.764 [2024-11-29 16:10:01.987942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:27:50.764 [2024-11-29 16:10:01.987962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.764 [2024-11-29 16:10:01.988053] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.764 [2024-11-29 16:10:01.988080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:50.764 [2024-11-29 16:10:01.988100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:50.764 [2024-11-29 16:10:01.988120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.764 [2024-11-29 16:10:02.006400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.764 [2024-11-29 16:10:02.006550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:50.764 [2024-11-29 
16:10:02.006612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.197 ms 00:27:50.764 [2024-11-29 16:10:02.006638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.764 [2024-11-29 16:10:02.006773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.764 [2024-11-29 16:10:02.006801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:50.764 [2024-11-29 16:10:02.006822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:50.764 [2024-11-29 16:10:02.006841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.764 [2024-11-29 16:10:02.025740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.764 [2024-11-29 16:10:02.025899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:50.764 [2024-11-29 16:10:02.025963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.868 ms 00:27:50.764 [2024-11-29 16:10:02.026002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.764 [2024-11-29 16:10:02.035794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.764 [2024-11-29 16:10:02.035935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:50.764 [2024-11-29 16:10:02.036022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.281 ms 00:27:50.764 [2024-11-29 16:10:02.036048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.764 [2024-11-29 16:10:02.100620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.764 [2024-11-29 16:10:02.100815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:50.764 [2024-11-29 16:10:02.100878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 64.495 ms 00:27:50.764 [2024-11-29 16:10:02.100902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.764 [2024-11-29 16:10:02.101019] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:50.764 [2024-11-29 16:10:02.101090] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:50.764 [2024-11-29 16:10:02.101149] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:50.764 [2024-11-29 16:10:02.101320] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:50.764 [2024-11-29 16:10:02.101350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.764 [2024-11-29 16:10:02.101370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:50.764 [2024-11-29 16:10:02.101435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.394 ms 00:27:50.764 [2024-11-29 16:10:02.101463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.764 [2024-11-29 16:10:02.101538] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:50.764 [2024-11-29 16:10:02.101573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.764 [2024-11-29 16:10:02.101592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:50.764 [2024-11-29 16:10:02.101662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:27:50.764 [2024-11-29 
16:10:02.101685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.764 [2024-11-29 16:10:02.118385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.764 [2024-11-29 16:10:02.118540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:50.764 [2024-11-29 16:10:02.118601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.647 ms 00:27:50.764 [2024-11-29 16:10:02.118623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.764 [2024-11-29 16:10:02.127545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.764 [2024-11-29 16:10:02.127678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:50.764 [2024-11-29 16:10:02.127730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:50.764 [2024-11-29 16:10:02.127753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.764 [2024-11-29 16:10:02.127834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.764 [2024-11-29 16:10:02.127860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:27:50.764 [2024-11-29 16:10:02.127880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:50.764 [2024-11-29 16:10:02.127898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.764 [2024-11-29 16:10:02.128150] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:27:51.709 [2024-11-29 16:10:02.795529] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:27:51.709 [2024-11-29 16:10:02.796006] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:27:51.970 [2024-11-29 16:10:03.387810] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:27:51.970 [2024-11-29 16:10:03.387927] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:51.970 [2024-11-29 16:10:03.387941] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:51.970 [2024-11-29 16:10:03.387953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.970 [2024-11-29 16:10:03.387964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:51.970 [2024-11-29 16:10:03.387998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1260.019 ms 00:27:51.970 [2024-11-29 16:10:03.388007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.970 [2024-11-29 16:10:03.388056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.970 [2024-11-29 16:10:03.388065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:51.970 [2024-11-29 16:10:03.388074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:51.970 [2024-11-29 16:10:03.388082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.232 [2024-11-29 16:10:03.400463] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:52.232 [2024-11-29 16:10:03.400595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.232 [2024-11-29 16:10:03.400606] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:52.232 [2024-11-29 16:10:03.400617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.496 ms 00:27:52.232 [2024-11-29 16:10:03.400625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.233 [2024-11-29 16:10:03.401363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.233 [2024-11-29 16:10:03.401392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:27:52.233 [2024-11-29 16:10:03.401401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.667 ms 00:27:52.233 [2024-11-29 16:10:03.401409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.233 [2024-11-29 16:10:03.403643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.233 [2024-11-29 16:10:03.403664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:52.233 [2024-11-29 16:10:03.403674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.216 ms 00:27:52.233 [2024-11-29 16:10:03.403683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.233 [2024-11-29 16:10:03.429921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.233 [2024-11-29 16:10:03.429987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:27:52.233 [2024-11-29 16:10:03.430000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.213 ms 00:27:52.233 [2024-11-29 16:10:03.430009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.233 [2024-11-29 16:10:03.430127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.233 [2024-11-29 16:10:03.430140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:52.233 [2024-11-29 16:10:03.430150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:52.233 [2024-11-29 16:10:03.430158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.233 [2024-11-29 16:10:03.431593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.233 [2024-11-29 16:10:03.431641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:52.233 [2024-11-29 16:10:03.431652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.415 ms 00:27:52.233 [2024-11-29 16:10:03.431660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.233 [2024-11-29 16:10:03.431698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.233 [2024-11-29 16:10:03.431707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:52.233 [2024-11-29 16:10:03.431716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:52.233 [2024-11-29 16:10:03.431724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.233 [2024-11-29 16:10:03.431773] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:52.233 [2024-11-29 16:10:03.431784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.233 [2024-11-29 16:10:03.431793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:52.233 [2024-11-29 16:10:03.431804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:52.233 [2024-11-29 16:10:03.431812] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:52.233 [2024-11-29 16:10:03.431870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.233 [2024-11-29 16:10:03.431880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:52.233 [2024-11-29 16:10:03.431888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:27:52.233 [2024-11-29 16:10:03.431896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.233 [2024-11-29 16:10:03.433024] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1524.720 ms, result 0 00:27:52.233 [2024-11-29 16:10:03.446317] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.233 [2024-11-29 16:10:03.462316] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:27:52.233 [2024-11-29 16:10:03.470511] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:52.494 Validate MD5 checksum, iteration 1 00:27:52.495 16:10:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:52.495 16:10:03 -- common/autotest_common.sh@862 -- # return 0 00:27:52.495 16:10:03 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:52.495 16:10:03 -- ftl/common.sh@95 -- # return 0 00:27:52.495 16:10:03 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:52.495 16:10:03 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:52.495 16:10:03 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:52.495 16:10:03 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:52.495 16:10:03 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:52.495 16:10:03 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:52.495 16:10:03 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:52.495 16:10:03 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:52.495 16:10:03 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:52.495 16:10:03 -- ftl/common.sh@154 -- # return 0 00:27:52.495 16:10:03 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:52.757 [2024-11-29 16:10:03.943507] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
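The spdk_dd runs on either side of this point attach to that 127.0.0.1:4420 listener through the ini.json passed with --json. The file's contents are never echoed into the log; for this topology it would be an NVMe/TCP attach of the exported namespace, along the lines of (a guess at the shape — controller name and subnqn are placeholders):

  cat > ini.json <<'EOF'
  {
    "subsystems": [{
      "subsystem": "bdev",
      "config": [{
        "method": "bdev_nvme_attach_controller",
        "params": {
          "name": "ftln",
          "trtype": "TCP", "adrfam": "IPv4",
          "traddr": "127.0.0.1", "trsvcid": "4420",
          "subnqn": "nqn.2018-09.io.spdk:cnode0"
        }
      }]
    }]
  }
  EOF

which makes the namespace visible to spdk_dd as ftln1, the --ib target of every copy.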
00:27:52.757 [2024-11-29 16:10:03.943786] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79454 ] 00:27:52.757 [2024-11-29 16:10:04.089220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.018 [2024-11-29 16:10:04.341149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.930  [2024-11-29T16:10:06.621Z] Copying: 619/1024 [MB] (619 MBps) [2024-11-29T16:10:08.007Z] Copying: 1024/1024 [MB] (average 639 MBps) 00:27:56.576 00:27:56.576 16:10:07 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:56.576 16:10:07 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:58.477 16:10:09 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:58.477 16:10:09 -- ftl/upgrade_shutdown.sh@103 -- # sum=764352b4387f108a2d12d57d519c06c0 00:27:58.477 16:10:09 -- ftl/upgrade_shutdown.sh@105 -- # [[ 764352b4387f108a2d12d57d519c06c0 != \7\6\4\3\5\2\b\4\3\8\7\f\1\0\8\a\2\d\1\2\d\5\7\d\5\1\9\c\0\6\c\0 ]] 00:27:58.477 16:10:09 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:58.477 16:10:09 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:58.477 16:10:09 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:58.477 Validate MD5 checksum, iteration 2 00:27:58.735 16:10:09 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:58.735 16:10:09 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:58.735 16:10:09 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:58.735 16:10:09 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:58.735 16:10:09 -- ftl/common.sh@154 -- # return 0 00:27:58.735 16:10:09 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:58.735 [2024-11-29 16:10:09.961783] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
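The backslash-riddled right-hand side in the [[ ... != \7\6\4... ]] trace above is just bash xtrace quoting every character of a quoted expansion; the script itself performs an ordinary literal comparison, roughly:

  [[ $sum != "$expected" ]] && return 1   # xtrace renders "$expected" as \7\6\4\3...

A mismatch between the pre-shutdown and post-recovery hashes would abort the test here.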
00:27:58.735 [2024-11-29 16:10:09.962006] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79524 ] 00:27:58.735 [2024-11-29 16:10:10.104260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.993 [2024-11-29 16:10:10.270234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.373  [2024-11-29T16:10:12.376Z] Copying: 679/1024 [MB] (679 MBps) [2024-11-29T16:10:17.650Z] Copying: 1024/1024 [MB] (average 672 MBps) 00:28:06.219 00:28:06.219 16:10:16 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:06.219 16:10:16 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:07.616 16:10:18 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:07.616 16:10:18 -- ftl/upgrade_shutdown.sh@103 -- # sum=390cdfa3d7c658202967a4f828919810 00:28:07.616 16:10:18 -- ftl/upgrade_shutdown.sh@105 -- # [[ 390cdfa3d7c658202967a4f828919810 != \3\9\0\c\d\f\a\3\d\7\c\6\5\8\2\0\2\9\6\7\a\4\f\8\2\8\9\1\9\8\1\0 ]] 00:28:07.616 16:10:18 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:07.616 16:10:18 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:07.616 16:10:18 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:07.616 16:10:18 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:07.616 16:10:18 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:07.616 16:10:18 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:07.616 16:10:18 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:07.616 16:10:18 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:07.616 16:10:18 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:07.616 16:10:18 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:07.616 16:10:18 -- ftl/common.sh@130 -- # [[ -n 79410 ]] 00:28:07.616 16:10:18 -- ftl/common.sh@131 -- # killprocess 79410 00:28:07.616 16:10:18 -- common/autotest_common.sh@936 -- # '[' -z 79410 ']' 00:28:07.616 16:10:18 -- common/autotest_common.sh@940 -- # kill -0 79410 00:28:07.616 16:10:18 -- common/autotest_common.sh@941 -- # uname 00:28:07.616 16:10:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:07.616 16:10:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79410 00:28:07.616 killing process with pid 79410 00:28:07.616 16:10:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:07.616 16:10:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:07.616 16:10:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79410' 00:28:07.616 16:10:18 -- common/autotest_common.sh@955 -- # kill 79410 00:28:07.616 16:10:18 -- common/autotest_common.sh@960 -- # wait 79410 00:28:08.184 [2024-11-29 16:10:19.401545] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:28:08.184 [2024-11-29 16:10:19.413286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.184 [2024-11-29 16:10:19.413320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:08.184 [2024-11-29 16:10:19.413331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:08.184 [2024-11-29 16:10:19.413337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.184 
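Unlike the SIGKILL earlier, killprocess delivers a catchable signal and waits on the pid, so this time the full 'FTL shutdown' management sequence below gets to run: the L2P is persisted, NV cache / valid map / P2L / band / trim metadata are written out, the superblock is stored, and the clean-state flag is set. The two teardown paths differ only in the signal, roughly:

  kill -9 "$pid"               # dirty: next startup must recover from P2L and NV cache
  kill "$pid" && wait "$pid"   # clean: FTL persists everything and marks itself clean

A subsequent startup from the clean image would skip the recovery steps exercised above.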
[2024-11-29 16:10:19.413354] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:08.184 [2024-11-29 16:10:19.415489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.184 [2024-11-29 16:10:19.415514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:08.184 [2024-11-29 16:10:19.415522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.124 ms 00:28:08.184 [2024-11-29 16:10:19.415529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.184 [2024-11-29 16:10:19.415723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.184 [2024-11-29 16:10:19.415734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:08.184 [2024-11-29 16:10:19.415741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.177 ms 00:28:08.184 [2024-11-29 16:10:19.415747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.184 [2024-11-29 16:10:19.416852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.184 [2024-11-29 16:10:19.416870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:08.184 [2024-11-29 16:10:19.416876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.092 ms 00:28:08.184 [2024-11-29 16:10:19.416882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.184 [2024-11-29 16:10:19.417750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.184 [2024-11-29 16:10:19.417766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:28:08.184 [2024-11-29 16:10:19.417777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.845 ms 00:28:08.184 [2024-11-29 16:10:19.417784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.184 [2024-11-29 16:10:19.425457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.184 [2024-11-29 16:10:19.425489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:08.184 [2024-11-29 16:10:19.425497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.645 ms 00:28:08.184 [2024-11-29 16:10:19.425503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.184 [2024-11-29 16:10:19.429546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.184 [2024-11-29 16:10:19.429575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:08.184 [2024-11-29 16:10:19.429583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.014 ms 00:28:08.184 [2024-11-29 16:10:19.429589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.184 [2024-11-29 16:10:19.429653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.184 [2024-11-29 16:10:19.429660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:08.184 [2024-11-29 16:10:19.429667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:28:08.184 [2024-11-29 16:10:19.429672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.184 [2024-11-29 16:10:19.437248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.184 [2024-11-29 16:10:19.437270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:28:08.184 [2024-11-29 16:10:19.437277] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.563 ms 00:28:08.184 [2024-11-29 16:10:19.437282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.184 [2024-11-29 16:10:19.444719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.184 [2024-11-29 16:10:19.444740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:28:08.184 [2024-11-29 16:10:19.444747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.411 ms 00:28:08.184 [2024-11-29 16:10:19.444752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.184 [2024-11-29 16:10:19.452068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.184 [2024-11-29 16:10:19.452090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:08.184 [2024-11-29 16:10:19.452097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.290 ms 00:28:08.184 [2024-11-29 16:10:19.452102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.184 [2024-11-29 16:10:19.459449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.184 [2024-11-29 16:10:19.459471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:08.184 [2024-11-29 16:10:19.459477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.299 ms 00:28:08.184 [2024-11-29 16:10:19.459482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.184 [2024-11-29 16:10:19.459508] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:08.184 [2024-11-29 16:10:19.459519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:08.184 [2024-11-29 16:10:19.459526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:08.184 [2024-11-29 16:10:19.459532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:08.184 [2024-11-29 16:10:19.459539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:08.184 [2024-11-29 16:10:19.459544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:08.184 [2024-11-29 16:10:19.459550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:08.184 [2024-11-29 16:10:19.459555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:08.184 [2024-11-29 16:10:19.459561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:08.184 [2024-11-29 16:10:19.459567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:08.184 [2024-11-29 16:10:19.459572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:08.184 [2024-11-29 16:10:19.459577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:08.184 [2024-11-29 16:10:19.459583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:08.185 [2024-11-29 16:10:19.459588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:08.185 [2024-11-29 16:10:19.459594] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:28:08.185 [2024-11-29 16:10:19.459605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:28:08.185 [2024-11-29 16:10:19.459611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:28:08.185 [2024-11-29 16:10:19.459616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:28:08.185 [2024-11-29 16:10:19.459622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:28:08.185 [2024-11-29 16:10:19.459629] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]
00:28:08.185 [2024-11-29 16:10:19.459637] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: c04498c1-bf25-4145-bf54-17bb86bfc638
00:28:08.185 [2024-11-29 16:10:19.459643] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288
00:28:08.185 [2024-11-29 16:10:19.459648] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320
00:28:08.185 [2024-11-29 16:10:19.459654] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0
00:28:08.185 [2024-11-29 16:10:19.459659] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf
00:28:08.185 [2024-11-29 16:10:19.459665] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits:
00:28:08.185 [2024-11-29 16:10:19.459670] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0
00:28:08.185 [2024-11-29 16:10:19.459676] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0
00:28:08.185 [2024-11-29 16:10:19.459680] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0
00:28:08.185 [2024-11-29 16:10:19.459685] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0
00:28:08.185 [2024-11-29 16:10:19.459690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:08.185 [2024-11-29 16:10:19.459697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics
00:28:08.185 [2024-11-29 16:10:19.459703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.183 ms
00:28:08.185 [2024-11-29 16:10:19.459710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:08.185 [2024-11-29 16:10:19.469088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:08.185 [2024-11-29 16:10:19.469109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P
00:28:08.185 [2024-11-29 16:10:19.469117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.355 ms
00:28:08.185 [2024-11-29 16:10:19.469124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:08.185 [2024-11-29 16:10:19.469271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:08.185 [2024-11-29 16:10:19.469277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:28:08.185 [2024-11-29 16:10:19.469287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.132 ms
00:28:08.185 [2024-11-29 16:10:19.469293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:08.185 [2024-11-29 16:10:19.504602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:28:08.185 [2024-11-29 16:10:19.504625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:28:08.185 [2024-11-29 16:10:19.504633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:28:08.185 [2024-11-29 16:10:19.504639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:08.185 [2024-11-29 16:10:19.504664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:28:08.185 [2024-11-29 16:10:19.504669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:28:08.185 [2024-11-29 16:10:19.504679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:28:08.185 [2024-11-29 16:10:19.504684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:08.185 [2024-11-29 16:10:19.504730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:28:08.185 [2024-11-29 16:10:19.504737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:28:08.185 [2024-11-29 16:10:19.504743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:28:08.185 [2024-11-29 16:10:19.504749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:08.185 [2024-11-29 16:10:19.504762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:28:08.185 [2024-11-29 16:10:19.504767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:28:08.185 [2024-11-29 16:10:19.504773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:28:08.185 [2024-11-29 16:10:19.504780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:08.185 [2024-11-29 16:10:19.563722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:28:08.185 [2024-11-29 16:10:19.563751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:28:08.185 [2024-11-29 16:10:19.563761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:28:08.185 [2024-11-29 16:10:19.563768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:08.185 [2024-11-29 16:10:19.586722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:28:08.185 [2024-11-29 16:10:19.586745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:28:08.185 [2024-11-29 16:10:19.586752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:28:08.185 [2024-11-29 16:10:19.586762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:08.185 [2024-11-29 16:10:19.586803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:28:08.185 [2024-11-29 16:10:19.586810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:28:08.185 [2024-11-29 16:10:19.586816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:28:08.185 [2024-11-29 16:10:19.586822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:08.185 [2024-11-29 16:10:19.586854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:28:08.185 [2024-11-29 16:10:19.586860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:28:08.185 [2024-11-29 16:10:19.586866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:28:08.185 [2024-11-29 16:10:19.586871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:08.185 [2024-11-29 16:10:19.586942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:28:08.185 [2024-11-29 16:10:19.586949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:28:08.185 [2024-11-29 16:10:19.586955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:28:08.185 [2024-11-29 16:10:19.586961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:08.185 [2024-11-29 16:10:19.586998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:28:08.185 [2024-11-29 16:10:19.587005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:28:08.185 [2024-11-29 16:10:19.587011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:28:08.185 [2024-11-29 16:10:19.587016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:08.185 [2024-11-29 16:10:19.587047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:28:08.185 [2024-11-29 16:10:19.587054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:28:08.185 [2024-11-29 16:10:19.587060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:28:08.185 [2024-11-29 16:10:19.587065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:08.185 [2024-11-29 16:10:19.587097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:28:08.185 [2024-11-29 16:10:19.587107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:28:08.185 [2024-11-29 16:10:19.587113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:28:08.185 [2024-11-29 16:10:19.587119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:08.185 [2024-11-29 16:10:19.587217] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 173.907 ms, result 0
00:28:09.122 16:10:20 -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:28:09.122 16:10:20 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:28:09.122 16:10:20 -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:28:09.122 16:10:20 -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:28:09.122 16:10:20 -- ftl/common.sh@181 -- # [[ -n '' ]]
00:28:09.122 16:10:20 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:28:09.122 Remove shared memory files 16:10:20 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:28:09.122 16:10:20 -- ftl/common.sh@204 -- # echo Remove shared memory files
00:28:09.122 16:10:20 -- ftl/common.sh@205 -- # rm -f rm -f
00:28:09.122 16:10:20 -- ftl/common.sh@206 -- # rm -f rm -f
00:28:09.122 16:10:20 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid79235
00:28:09.122 16:10:20 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:28:09.122 16:10:20 -- ftl/common.sh@209 -- # rm -f rm -f
00:28:09.122
00:28:09.122 real 1m21.855s
00:28:09.122 user 1m53.867s
00:28:09.122 sys 0m19.297s
00:28:09.122 16:10:20 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:09.122 16:10:20 -- common/autotest_common.sh@10 -- # set +x
00:28:09.122 ************************************
00:28:09.122 END TEST ftl_upgrade_shutdown
00:28:09.122 ************************************
00:28:09.122 16:10:20 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']'
00:28:09.122 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected
00:28:09.122 16:10:20 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']'
00:28:09.122 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected
00:28:09.122 16:10:20 -- ftl/ftl.sh@1 -- # at_ftl_exit
00:28:09.122 16:10:20 -- ftl/ftl.sh@14 -- # killprocess 70386
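The two "unary operator expected" failures above are a real script bug rather than test noise: at ftl.sh lines 82 and 89 an unset variable expands to nothing, so the '[' builtin sees only '-eq 1' with no left operand. A minimal sketch of the usual hardening, using a hypothetical FTL_NIGHTLY flag name since the actual variable is not visible in this log:

    # Hypothetical guard: ${FTL_NIGHTLY:-0} substitutes 0 when the variable
    # is unset or empty, so '[' always gets two operands around -eq.
    if [ "${FTL_NIGHTLY:-0}" -eq 1 ]; then
        echo "nightly-only FTL tests would run here"
    fi

Quoting the expansion also keeps the test from collapsing when the value contains whitespace.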
00:28:09.122 16:10:20 -- common/autotest_common.sh@936 -- # '[' -z 70386 ']'
00:28:09.122 16:10:20 -- common/autotest_common.sh@940 -- # kill -0 70386
00:28:09.122 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70386) - No such process
00:28:09.122 Process with pid 70386 is not found 16:10:20 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70386 is not found'
00:28:09.122 16:10:20 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:07.0 ]]
00:28:09.122 16:10:20 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=79667
00:28:09.122 16:10:20 -- ftl/ftl.sh@20 -- # waitforlisten 79667
00:28:09.122 16:10:20 -- common/autotest_common.sh@829 -- # '[' -z 79667 ']'
00:28:09.122 16:10:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:09.122 16:10:20 -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:09.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 16:10:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:09.122 16:10:20 -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:09.122 16:10:20 -- common/autotest_common.sh@10 -- # set +x
00:28:09.122 16:10:20 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:28:09.122 [2024-11-29 16:10:20.367818] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:28:09.122 [2024-11-29 16:10:20.367936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79667 ]
00:28:09.122 [2024-11-29 16:10:20.524551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:09.381 [2024-11-29 16:10:20.664038] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:28:09.381 [2024-11-29 16:10:20.664197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:10.316 16:10:21 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:10.316 16:10:21 -- common/autotest_common.sh@862 -- # return 0
00:28:10.316 16:10:21 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
00:28:10.576 nvme0n1
00:28:10.576 16:10:21 -- ftl/ftl.sh@22 -- # clear_lvols
00:28:10.576 16:10:21 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:28:10.576 16:10:21 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:28:10.834 16:10:22 -- ftl/common.sh@28 -- # stores=08ef6962-0d12-4d0f-850b-5bec12dcf7b2
00:28:10.834 16:10:22 -- ftl/common.sh@29 -- # for lvs in $stores
00:28:10.834 16:10:22 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 08ef6962-0d12-4d0f-850b-5bec12dcf7b2
00:28:11.092 16:10:22 -- ftl/ftl.sh@23 -- # killprocess 79667
00:28:11.092 16:10:22 -- common/autotest_common.sh@936 -- # '[' -z 79667 ']'
00:28:11.092 16:10:22 -- common/autotest_common.sh@940 -- # kill -0 79667
00:28:11.092 16:10:22 -- common/autotest_common.sh@941 -- # uname
00:28:11.092 16:10:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:11.092 16:10:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79667
00:28:11.092 killing process with pid 79667 16:10:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0
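The killprocess traces above follow a common liveness-check pattern: kill -0 delivers no signal but reports whether the pid exists, and the process name is inspected before anything is actually killed. A condensed sketch of that pattern, assuming the same general shape as the helper in test/common/autotest_common.sh (not its exact body):

    # Condensed killprocess sketch: probe with signal 0, refuse to signal a
    # sudo wrapper, then terminate and reap the target.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1               # no pid supplied
        if ! kill -0 "$pid" 2> /dev/null; then  # signal 0 = existence test only
            echo "Process with pid $pid is not found"
            return 0
        fi
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1  # never signal the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2> /dev/null || true        # reap it if it was our child
    }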
00:28:11.092 16:10:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:28:11.092 16:10:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79667'
00:28:11.092 16:10:22 -- common/autotest_common.sh@955 -- # kill 79667
00:28:12.552 16:10:23 -- common/autotest_common.sh@960 -- # wait 79667
00:28:12.552 16:10:23 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:28:12.552 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:12.552 Waiting for block devices as requested
00:28:12.552 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme
00:28:12.552 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme
00:28:12.552 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:28:12.814 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme
00:28:18.102 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing
00:28:18.102 16:10:29 -- ftl/ftl.sh@28 -- # remove_shm
00:28:18.102 Remove shared memory files 16:10:29 -- ftl/common.sh@204 -- # echo Remove shared memory files
00:28:18.102 16:10:29 -- ftl/common.sh@205 -- # rm -f rm -f
00:28:18.102 16:10:29 -- ftl/common.sh@206 -- # rm -f rm -f
00:28:18.102 16:10:29 -- ftl/common.sh@207 -- # rm -f rm -f
00:28:18.102 16:10:29 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:28:18.102 16:10:29 -- ftl/common.sh@209 -- # rm -f rm -f
00:28:18.102
00:28:18.102 real 13m30.408s
00:28:18.102 user 15m30.440s
00:28:18.102 sys 1m18.695s
00:28:18.102 16:10:29 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:18.102 16:10:29 -- common/autotest_common.sh@10 -- # set +x
00:28:18.102 ************************************
00:28:18.102 END TEST ftl
00:28:18.102 ************************************
00:28:18.102 16:10:29 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:28:18.103 16:10:29 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']'
00:28:18.103 16:10:29 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:28:18.103 16:10:29 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:28:18.103 16:10:29 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]]
00:28:18.103 16:10:29 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]]
00:28:18.103 16:10:29 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]]
00:28:18.103 16:10:29 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]]
00:28:18.103 16:10:29 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT
00:28:18.103 16:10:29 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup
00:28:18.103 16:10:29 -- common/autotest_common.sh@722 -- # xtrace_disable
00:28:18.103 16:10:29 -- common/autotest_common.sh@10 -- # set +x
00:28:18.103 16:10:29 -- spdk/autotest.sh@373 -- # autotest_cleanup
00:28:18.103 16:10:29 -- common/autotest_common.sh@1381 -- # local autotest_es=0
00:28:18.103 16:10:29 -- common/autotest_common.sh@1382 -- # xtrace_disable
00:28:18.103 16:10:29 -- common/autotest_common.sh@10 -- # set +x
00:28:19.045 INFO: APP EXITING
00:28:19.045 INFO: killing all VMs
00:28:19.045 INFO: killing vhost app
00:28:19.045 INFO: EXIT DONE
00:28:19.990 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:19.990 0000:00:09.0 (1b36 0010): Already using the nvme driver
00:28:19.990 0000:00:08.0 (1b36 0010): Already using the nvme driver
00:28:19.990 0000:00:06.0 (1b36 0010): Already using the nvme driver
00:28:19.990 0000:00:07.0 (1b36 0010): Already using the nvme driver
00:28:20.563 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:20.563 Cleaning
00:28:20.564 Removing: /var/run/dpdk/spdk0/config
00:28:20.826 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:28:20.826 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:28:20.826 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:28:20.826 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:28:20.826 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:28:20.826 Removing: /var/run/dpdk/spdk0/hugepage_info
00:28:20.826 Removing: /var/run/dpdk/spdk0
00:28:20.826 Removing: /var/run/dpdk/spdk_pid55953
00:28:20.826 Removing: /var/run/dpdk/spdk_pid56149
00:28:20.826 Removing: /var/run/dpdk/spdk_pid56454
00:28:20.826 Removing: /var/run/dpdk/spdk_pid56547
00:28:20.826 Removing: /var/run/dpdk/spdk_pid56631
00:28:20.826 Removing: /var/run/dpdk/spdk_pid56743
00:28:20.826 Removing: /var/run/dpdk/spdk_pid56852
00:28:20.826 Removing: /var/run/dpdk/spdk_pid56886
00:28:20.826 Removing: /var/run/dpdk/spdk_pid56928
00:28:20.826 Removing: /var/run/dpdk/spdk_pid56992
00:28:20.826 Removing: /var/run/dpdk/spdk_pid57087
00:28:20.826 Removing: /var/run/dpdk/spdk_pid57511
00:28:20.826 Removing: /var/run/dpdk/spdk_pid57564
00:28:20.826 Removing: /var/run/dpdk/spdk_pid57621
00:28:20.826 Removing: /var/run/dpdk/spdk_pid57632
00:28:20.826 Removing: /var/run/dpdk/spdk_pid57731
00:28:20.826 Removing: /var/run/dpdk/spdk_pid57747
00:28:20.826 Removing: /var/run/dpdk/spdk_pid57844
00:28:20.826 Removing: /var/run/dpdk/spdk_pid57856
00:28:20.826 Removing: /var/run/dpdk/spdk_pid57909
00:28:20.826 Removing: /var/run/dpdk/spdk_pid57927
00:28:20.826 Removing: /var/run/dpdk/spdk_pid57980
00:28:20.826 Removing: /var/run/dpdk/spdk_pid57998
00:28:20.826 Removing: /var/run/dpdk/spdk_pid58161
00:28:20.826 Removing: /var/run/dpdk/spdk_pid58203
00:28:20.826 Removing: /var/run/dpdk/spdk_pid58280
00:28:20.826 Removing: /var/run/dpdk/spdk_pid58350
00:28:20.826 Removing: /var/run/dpdk/spdk_pid58381
00:28:20.826 Removing: /var/run/dpdk/spdk_pid58448
00:28:20.826 Removing: /var/run/dpdk/spdk_pid58473
00:28:20.826 Removing: /var/run/dpdk/spdk_pid58510
00:28:20.826 Removing: /var/run/dpdk/spdk_pid58536
00:28:20.826 Removing: /var/run/dpdk/spdk_pid58571
00:28:20.826 Removing: /var/run/dpdk/spdk_pid58602
00:28:20.826 Removing: /var/run/dpdk/spdk_pid58644
00:28:20.826 Removing: /var/run/dpdk/spdk_pid58670
00:28:20.826 Removing: /var/run/dpdk/spdk_pid58710
00:28:20.826 Removing: /var/run/dpdk/spdk_pid58731
00:28:20.826 Removing: /var/run/dpdk/spdk_pid58772
00:28:20.826 Removing: /var/run/dpdk/spdk_pid58798
00:28:20.826 Removing: /var/run/dpdk/spdk_pid58839
00:28:20.826 Removing: /var/run/dpdk/spdk_pid58860
00:28:20.826 Removing: /var/run/dpdk/spdk_pid58901
00:28:20.826 Removing: /var/run/dpdk/spdk_pid58929
00:28:20.826 Removing: /var/run/dpdk/spdk_pid58970
00:28:20.826 Removing: /var/run/dpdk/spdk_pid58996
00:28:20.826 Removing: /var/run/dpdk/spdk_pid59038
00:28:20.826 Removing: /var/run/dpdk/spdk_pid59058
00:28:20.826 Removing: /var/run/dpdk/spdk_pid59100
00:28:20.826 Removing: /var/run/dpdk/spdk_pid59122
00:28:20.826 Removing: /var/run/dpdk/spdk_pid59163
00:28:20.826 Removing: /var/run/dpdk/spdk_pid59189
00:28:20.826 Removing: /var/run/dpdk/spdk_pid59230
00:28:20.826 Removing: /var/run/dpdk/spdk_pid59256
00:28:20.826 Removing: /var/run/dpdk/spdk_pid59297
00:28:20.826 Removing: /var/run/dpdk/spdk_pid59323
00:28:20.826 Removing: /var/run/dpdk/spdk_pid59359
00:28:20.826 Removing: /var/run/dpdk/spdk_pid59385
00:28:20.826 Removing: /var/run/dpdk/spdk_pid59420
00:28:20.826 Removing: /var/run/dpdk/spdk_pid59452
00:28:20.826 Removing: /var/run/dpdk/spdk_pid59493
00:28:20.826 Removing: /var/run/dpdk/spdk_pid59522
00:28:20.826 Removing: /var/run/dpdk/spdk_pid59566
00:28:20.826 Removing: /var/run/dpdk/spdk_pid59595
00:28:20.826 Removing: /var/run/dpdk/spdk_pid59639
00:28:20.826 Removing: /var/run/dpdk/spdk_pid59665
00:28:20.826 Removing: /var/run/dpdk/spdk_pid59706
00:28:20.826 Removing: /var/run/dpdk/spdk_pid59735
00:28:20.826 Removing: /var/run/dpdk/spdk_pid59777
00:28:20.826 Removing: /var/run/dpdk/spdk_pid59855
00:28:20.826 Removing: /var/run/dpdk/spdk_pid59967
00:28:20.826 Removing: /var/run/dpdk/spdk_pid60131
00:28:20.826 Removing: /var/run/dpdk/spdk_pid60210
00:28:20.826 Removing: /var/run/dpdk/spdk_pid60241
00:28:20.826 Removing: /var/run/dpdk/spdk_pid60686
00:28:20.827 Removing: /var/run/dpdk/spdk_pid60892
00:28:20.827 Removing: /var/run/dpdk/spdk_pid61013
00:28:20.827 Removing: /var/run/dpdk/spdk_pid61055
00:28:20.827 Removing: /var/run/dpdk/spdk_pid61086
00:28:20.827 Removing: /var/run/dpdk/spdk_pid61169
00:28:20.827 Removing: /var/run/dpdk/spdk_pid61820
00:28:20.827 Removing: /var/run/dpdk/spdk_pid61858
00:28:20.827 Removing: /var/run/dpdk/spdk_pid62344
00:28:20.827 Removing: /var/run/dpdk/spdk_pid62456
00:28:20.827 Removing: /var/run/dpdk/spdk_pid62559
00:28:20.827 Removing: /var/run/dpdk/spdk_pid62607
00:28:20.827 Removing: /var/run/dpdk/spdk_pid62638
00:28:20.827 Removing: /var/run/dpdk/spdk_pid62663
00:28:20.827 Removing: /var/run/dpdk/spdk_pid64587
00:28:20.827 Removing: /var/run/dpdk/spdk_pid64726
00:28:20.827 Removing: /var/run/dpdk/spdk_pid64730
00:28:20.827 Removing: /var/run/dpdk/spdk_pid64742
00:28:20.827 Removing: /var/run/dpdk/spdk_pid64800
00:28:20.827 Removing: /var/run/dpdk/spdk_pid64810
00:28:20.827 Removing: /var/run/dpdk/spdk_pid64822
00:28:20.827 Removing: /var/run/dpdk/spdk_pid64889
00:28:20.827 Removing: /var/run/dpdk/spdk_pid64898
00:28:21.088 Removing: /var/run/dpdk/spdk_pid64910
00:28:21.088 Removing: /var/run/dpdk/spdk_pid64972
00:28:21.088 Removing: /var/run/dpdk/spdk_pid64976
00:28:21.088 Removing: /var/run/dpdk/spdk_pid64988
00:28:21.088 Removing: /var/run/dpdk/spdk_pid66437
00:28:21.088 Removing: /var/run/dpdk/spdk_pid66541
00:28:21.088 Removing: /var/run/dpdk/spdk_pid66686
00:28:21.088 Removing: /var/run/dpdk/spdk_pid66773
00:28:21.088 Removing: /var/run/dpdk/spdk_pid66851
00:28:21.088 Removing: /var/run/dpdk/spdk_pid66927
00:28:21.088 Removing: /var/run/dpdk/spdk_pid67024
00:28:21.088 Removing: /var/run/dpdk/spdk_pid67099
00:28:21.088 Removing: /var/run/dpdk/spdk_pid67246
00:28:21.088 Removing: /var/run/dpdk/spdk_pid67626
00:28:21.088 Removing: /var/run/dpdk/spdk_pid67657
00:28:21.088 Removing: /var/run/dpdk/spdk_pid68097
00:28:21.088 Removing: /var/run/dpdk/spdk_pid68285
00:28:21.088 Removing: /var/run/dpdk/spdk_pid68391
00:28:21.089 Removing: /var/run/dpdk/spdk_pid68495
00:28:21.089 Removing: /var/run/dpdk/spdk_pid68548
00:28:21.089 Removing: /var/run/dpdk/spdk_pid68578
00:28:21.089 Removing: /var/run/dpdk/spdk_pid68889
00:28:21.089 Removing: /var/run/dpdk/spdk_pid68951
00:28:21.089 Removing: /var/run/dpdk/spdk_pid69026
00:28:21.089 Removing: /var/run/dpdk/spdk_pid69425
00:28:21.089 Removing: /var/run/dpdk/spdk_pid69575
00:28:21.089 Removing: /var/run/dpdk/spdk_pid70386
00:28:21.089 Removing: /var/run/dpdk/spdk_pid70511
00:28:21.089 Removing: /var/run/dpdk/spdk_pid70688
00:28:21.089 Removing: /var/run/dpdk/spdk_pid70791
00:28:21.089 Removing: /var/run/dpdk/spdk_pid71082
00:28:21.089 Removing: /var/run/dpdk/spdk_pid71387
00:28:21.089 Removing: /var/run/dpdk/spdk_pid71758
00:28:21.089 Removing: /var/run/dpdk/spdk_pid71988
00:28:21.089 Removing: /var/run/dpdk/spdk_pid72193
00:28:21.089 Removing: /var/run/dpdk/spdk_pid72240
00:28:21.089 Removing: /var/run/dpdk/spdk_pid72453
00:28:21.089 Removing: /var/run/dpdk/spdk_pid72474
00:28:21.089 Removing: /var/run/dpdk/spdk_pid72522
00:28:21.089 Removing: /var/run/dpdk/spdk_pid72735
00:28:21.089 Removing: /var/run/dpdk/spdk_pid72973
00:28:21.089 Removing: /var/run/dpdk/spdk_pid73609
00:28:21.089 Removing: /var/run/dpdk/spdk_pid74392
00:28:21.089 Removing: /var/run/dpdk/spdk_pid75093
00:28:21.089 Removing: /var/run/dpdk/spdk_pid75877
00:28:21.089 Removing: /var/run/dpdk/spdk_pid76023
00:28:21.089 Removing: /var/run/dpdk/spdk_pid76109
00:28:21.089 Removing: /var/run/dpdk/spdk_pid76466
00:28:21.089 Removing: /var/run/dpdk/spdk_pid76522
00:28:21.089 Removing: /var/run/dpdk/spdk_pid77323
00:28:21.089 Removing: /var/run/dpdk/spdk_pid77865
00:28:21.089 Removing: /var/run/dpdk/spdk_pid78661
00:28:21.089 Removing: /var/run/dpdk/spdk_pid78787
00:28:21.089 Removing: /var/run/dpdk/spdk_pid78842
00:28:21.089 Removing: /var/run/dpdk/spdk_pid78902
00:28:21.089 Removing: /var/run/dpdk/spdk_pid78952
00:28:21.089 Removing: /var/run/dpdk/spdk_pid79017
00:28:21.089 Removing: /var/run/dpdk/spdk_pid79235
00:28:21.089 Removing: /var/run/dpdk/spdk_pid79287
00:28:21.089 Removing: /var/run/dpdk/spdk_pid79348
00:28:21.089 Removing: /var/run/dpdk/spdk_pid79410
00:28:21.089 Removing: /var/run/dpdk/spdk_pid79454
00:28:21.089 Removing: /var/run/dpdk/spdk_pid79524
00:28:21.089 Removing: /var/run/dpdk/spdk_pid79667
00:28:21.089 Clean
00:28:21.350 killing process with pid 48159
00:28:21.350 killing process with pid 48164
00:28:21.350 16:10:32 -- common/autotest_common.sh@1446 -- # return 0
00:28:21.350 16:10:32 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup
00:28:21.350 16:10:32 -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:21.350 16:10:32 -- common/autotest_common.sh@10 -- # set +x
00:28:21.350 16:10:32 -- spdk/autotest.sh@376 -- # timing_exit autotest
00:28:21.350 16:10:32 -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:21.350 16:10:32 -- common/autotest_common.sh@10 -- # set +x
00:28:21.350 16:10:32 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:28:21.350 16:10:32 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:28:21.350 16:10:32 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:28:21.350 16:10:32 -- spdk/autotest.sh@381 -- # [[ y == y ]]
00:28:21.350 16:10:32 -- spdk/autotest.sh@383 -- # hostname
00:28:21.350 16:10:32 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:28:21.613 geninfo: WARNING: invalid characters removed from testname!
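The capture above and the merge and filter calls that follow are the standard lcov aggregation flow: combine the baseline and test captures into one tracefile, then strip source trees that should not count toward coverage. A condensed sketch with shortened paths, assuming the .info inputs already exist:

    # Merge baseline + test coverage, then drop third-party and system code.
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info   # vendored DPDK sources
    lcov -q -r cov_total.info '/usr/*' -o cov_total.info     # system headers

Filtering after the merge keeps the removal patterns in one place instead of repeating them for each capture.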
00:28:48.224 16:10:56 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:48.224 16:10:59 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:50.776 16:11:01 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:52.154 16:11:03 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:54.700 16:11:05 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:56.079 16:11:07 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:58.629 16:11:09 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:28:58.629 16:11:09 -- common/autotest_common.sh@1689 -- $ [[ y == y ]]
00:28:58.629 16:11:09 -- common/autotest_common.sh@1690 -- $ lcov --version
00:28:58.629 16:11:09 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}'
00:28:58.629 16:11:10 -- common/autotest_common.sh@1690 -- $ lt 1.15 2
00:28:58.629 16:11:10 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2
00:28:58.629 16:11:10 -- scripts/common.sh@332 -- $ local ver1 ver1_l
00:28:58.629 16:11:10 -- scripts/common.sh@333 -- $ local ver2 ver2_l
00:28:58.629 16:11:10 -- scripts/common.sh@335 -- $ IFS=.-:
00:28:58.629 16:11:10 -- scripts/common.sh@335 -- $ read -ra ver1
00:28:58.629 16:11:10 -- scripts/common.sh@336 -- $ IFS=.-:
00:28:58.629 16:11:10 -- scripts/common.sh@336 -- $ read -ra ver2
00:28:58.629 16:11:10 -- scripts/common.sh@337 -- $ local 'op=<'
00:28:58.629 16:11:10 -- scripts/common.sh@339 -- $ ver1_l=2
00:28:58.629 16:11:10 -- scripts/common.sh@340 -- $ ver2_l=1
00:28:58.629 16:11:10 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:28:58.629 16:11:10 -- scripts/common.sh@343 -- $ case "$op" in
00:28:58.629 16:11:10 -- scripts/common.sh@344 -- $ : 1
00:28:58.629 16:11:10 -- scripts/common.sh@363 -- $ (( v = 0 ))
00:28:58.629 16:11:10 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:58.629 16:11:10 -- scripts/common.sh@364 -- $ decimal 1
00:28:58.629 16:11:10 -- scripts/common.sh@352 -- $ local d=1
00:28:58.629 16:11:10 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:28:58.629 16:11:10 -- scripts/common.sh@354 -- $ echo 1
00:28:58.629 16:11:10 -- scripts/common.sh@364 -- $ ver1[v]=1
00:28:58.629 16:11:10 -- scripts/common.sh@365 -- $ decimal 2
00:28:58.629 16:11:10 -- scripts/common.sh@352 -- $ local d=2
00:28:58.629 16:11:10 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:28:58.629 16:11:10 -- scripts/common.sh@354 -- $ echo 2
00:28:58.629 16:11:10 -- scripts/common.sh@365 -- $ ver2[v]=2
00:28:58.629 16:11:10 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:28:58.629 16:11:10 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] ))
00:28:58.629 16:11:10 -- scripts/common.sh@367 -- $ return 0
00:28:58.629 16:11:10 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:58.629 16:11:10 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS=
00:28:58.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:58.629 --rc genhtml_branch_coverage=1
00:28:58.629 --rc genhtml_function_coverage=1
00:28:58.629 --rc genhtml_legend=1
00:28:58.629 --rc geninfo_all_blocks=1
00:28:58.629 --rc geninfo_unexecuted_blocks=1
00:28:58.629
00:28:58.629 '
00:28:58.629 16:11:10 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS='
00:28:58.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:58.629 --rc genhtml_branch_coverage=1
00:28:58.629 --rc genhtml_function_coverage=1
00:28:58.629 --rc genhtml_legend=1
00:28:58.629 --rc geninfo_all_blocks=1
00:28:58.629 --rc geninfo_unexecuted_blocks=1
00:28:58.629
00:28:58.629 '
00:28:58.629 16:11:10 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov
00:28:58.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:58.629 --rc genhtml_branch_coverage=1
00:28:58.629 --rc genhtml_function_coverage=1
00:28:58.629 --rc genhtml_legend=1
00:28:58.629 --rc geninfo_all_blocks=1
00:28:58.629 --rc geninfo_unexecuted_blocks=1
00:28:58.629
00:28:58.629 '
00:28:58.629 16:11:10 -- common/autotest_common.sh@1704 -- $ LCOV='lcov
00:28:58.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:58.629 --rc genhtml_branch_coverage=1
00:28:58.629 --rc genhtml_function_coverage=1
00:28:58.629 --rc genhtml_legend=1
00:28:58.629 --rc geninfo_all_blocks=1
00:28:58.629 --rc geninfo_unexecuted_blocks=1
00:28:58.629
00:28:58.629 '
00:28:58.629 16:11:10 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:28:58.629 16:11:10 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:28:58.629 16:11:10 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:58.629 16:11:10 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
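The cmp_versions trace above splits each version string on IFS=.-: and compares the numeric components pairwise, treating missing components as 0. A condensed sketch of that comparison, reduced to a plain less-than test (the real scripts/common.sh helper handles several operators, not just '<'):

    # version_lt 1.15 2 -> success; missing components compare as 0.
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < len; i++ )); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "1.15 < 2"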
00:28:58.629 16:11:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:58.629 16:11:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:58.629 16:11:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:58.629 16:11:10 -- paths/export.sh@5 -- $ export PATH
00:28:58.629 16:11:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:58.629 16:11:10 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:28:58.629 16:11:10 -- common/autobuild_common.sh@440 -- $ date +%s
00:28:58.629 16:11:10 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732896670.XXXXXX
00:28:58.629 16:11:10 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732896670.fTlSDY
00:28:58.629 16:11:10 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:28:58.629 16:11:10 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:28:58.629 16:11:10 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:28:58.629 16:11:10 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:28:58.629 16:11:10 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:28:58.629 16:11:10 -- common/autobuild_common.sh@456 -- $ get_config_params
00:28:58.629 16:11:10 -- common/autotest_common.sh@397 -- $ xtrace_disable
00:28:58.629 16:11:10 -- common/autotest_common.sh@10 -- $ set +x
00:28:58.629 16:11:10 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:28:58.629 16:11:10 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:28:58.629 16:11:10 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:28:58.629 16:11:10 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:28:58.629 16:11:10 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:28:58.629 16:11:10 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:28:58.629 16:11:10 -- spdk/autopackage.sh@19 -- $ timing_finish
00:28:58.629 16:11:10 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:28:58.629 16:11:10 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:28:58.629 16:11:10 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:28:58.890 16:11:10 -- spdk/autopackage.sh@20 -- $ exit 0
00:28:58.900 + [[ -n 4983 ]]
00:28:58.900 + sudo kill 4983
00:28:58.910 [Pipeline] }
00:28:58.916 [Pipeline] // timeout
00:28:58.921 [Pipeline] }
00:28:58.935 [Pipeline] // stage
00:28:58.942 [Pipeline] }
00:28:58.955 [Pipeline] // catchError
00:28:58.964 [Pipeline] stage
00:28:58.967 [Pipeline] { (Stop VM)
00:28:58.978 [Pipeline] sh
00:28:59.275 + vagrant halt
00:29:01.824 ==> default: Halting domain...
00:29:08.427 [Pipeline] sh
00:29:08.711 + vagrant destroy -f
00:29:11.262 ==> default: Removing domain...
00:29:11.848 [Pipeline] sh
00:29:12.134 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:29:12.146 [Pipeline] }
00:29:12.161 [Pipeline] // stage
00:29:12.166 [Pipeline] }
00:29:12.181 [Pipeline] // dir
00:29:12.186 [Pipeline] }
00:29:12.201 [Pipeline] // wrap
00:29:12.208 [Pipeline] }
00:29:12.221 [Pipeline] // catchError
00:29:12.230 [Pipeline] stage
00:29:12.233 [Pipeline] { (Epilogue)
00:29:12.248 [Pipeline] sh
00:29:12.623 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:29:16.829 [Pipeline] catchError
00:29:16.832 [Pipeline] {
00:29:16.845 [Pipeline] sh
00:29:17.131 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:29:17.131 Artifacts sizes are good
00:29:17.142 [Pipeline] }
00:29:17.156 [Pipeline] // catchError
00:29:17.168 [Pipeline] archiveArtifacts
00:29:17.176 Archiving artifacts
00:29:17.292 [Pipeline] cleanWs
00:29:17.305 [WS-CLEANUP] Deleting project workspace...
00:29:17.305 [WS-CLEANUP] Deferred wipeout is used...
00:29:17.312 [WS-CLEANUP] done
00:29:17.314 [Pipeline] }
00:29:17.330 [Pipeline] // stage
00:29:17.336 [Pipeline] }
00:29:17.350 [Pipeline] // node
00:29:17.356 [Pipeline] End of Pipeline
00:29:17.395 Finished: SUCCESS